
In his landmark 1999 essay, “Scientific Communication — A Vanity Fair?” polymath Georg Franck warned readers that our dependence on citation counting could result in a “shadow market,” where journal editors coerce authors into bolstering their citation counts by requiring that unnecessary journal references be added to a manuscript prior to acceptance.

A new article published in Science this week suggests that this fear may have already become a reality, at least for the business and marketing literature.

The article, “Coercive Citation in Academic Publishing,” analyzed nearly 7,000 responses from an online questionnaire sent to researchers in economics, sociology, psychology, and business. The researchers were particularly interested in whether editors attempted to coerce authors into citing more articles from their own journal. A coercive self-citation request was defined as a request for more journal citations without providing specific relevant articles or indicating that the manuscript was lacking in attribution.

The researchers report that one in five respondents described being coerced by editors. While the vast majority of respondents (86%) viewed citation coercion as inappropriate behavior, more than half (57%) indicated that they would consent to the request. Not surprisingly, lower-ranked faculty were more likely to acquiesce to coercion. The culture of citation coercion also appears to be much more of a problem in business than in economics, sociology, or psychology. A list of the biggest journal offenders is found in supplementary table S12.

The researchers also reported that journals published by commercial presses or societies were more likely to attempt citation coercion than journals published by university presses.

To me, this is the weakest part of the analysis, since the authors didn’t distinguish between who runs the operation of a journal and who publishes it. Many scholarly journals are owned and operated by a society that may select a commercial publisher to host them. Even so, scholars are largely in control of a journal’s editorial office and review process, whoever publishes the journal. For the record, Elsevier is the publisher of 4 of the top 5 journal offenders.

Citation gaming is as old as the process of attribution itself, and we shouldn’t be shocked that inappropriate citation behavior is occurring. This study suggests that citation coercion is a problem for a number of journals in business, yet we should be hesitant to conclude that the problem is acute and widespread. While there are clear benefits for a journal engaging in citation coercion, there is also real potential for doing harm.

Most of the respondents in the study felt that citation coercion was inappropriate, depressed the prestige of the journal, and reduced the likelihood that authors would engage with that journal in the future. Thomson Reuters also monitors self-citations when calculating a journal’s impact factor, and may delist a journal when self-citation rates become too high or change the relative ranking of the journal within its field. No editor wants to be known as the one who put the journal in “time out.”
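To make that metric concrete: a journal’s self-citation rate is simply the share of its incoming citations that come from the journal’s own pages. Below is a minimal sketch of that calculation (the function and data are invented for illustration, and this is not Thomson Reuters’ actual methodology):

```python
def self_citation_rate(citations, journal):
    """Fraction of a journal's incoming citations that originate
    from the journal itself.

    `citations` is an iterable of (citing_journal, cited_journal) pairs.
    """
    incoming = [citing for citing, cited in citations if cited == journal]
    if not incoming:
        return 0.0
    return incoming.count(journal) / len(incoming)

# Invented example: "Journal A" receives 10 citations, 2 of them from itself.
records = [("Journal A", "Journal A")] * 2 + [("Journal B", "Journal A")] * 8
print(self_citation_rate(records, "Journal A"))  # → 0.2
```

A journal whose rate drifts far above its field’s norm, or whose rate materially changes its ranking, is the kind of outlier such monitoring is designed to flag.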

In sum, the results of this study put real numbers to what many have reported anecdotally over the years. The study also points to errant behaviors that may be more acceptable in some fields than in others. Any variable that is used to evaluate science is susceptible to manipulation. Developing tools to detect, report, and hold accountable those who view citations as little more than marketing tools is critical for those of us who still view the citation as a tool for scholarly attribution.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist.


24 Thoughts on "When Journal Editors Coerce Authors to Self-Cite"

Phil, what do you make of the “long tail” on that table you cite? By my rough count, of the 134 or so journals identified as coercers by survey respondents, 52 had been so identified by only one survey respondent (possibly in only a single incident). That’s out of nearly 7,000 survey responses, as you note.

While clearly there’s what looks like a pattern for the journals at the top of the list, I wonder how much of a conclusion you can draw on journals identified for one infraction. Or am I just misreading the data? (Wouldn’t be the first time . . .)

The survey asks respondents to recollect events that happened in the last five years, so there could be some recall bias going on. Respondents could also be inaccurately attributing a coercive request to another similarly titled journal, or misinterpreting the request entirely. Taken together, I’m not sure what to make of the long tail, only that it may reflect several cognitive biases and errors of attribution. When several of the titles get independent confirmation from scores of authors, however, I think there is something suspicious taking place.

Phil, I’ve had a chance now to speak with others who’ve been with AOM much longer than I have. I’ve no doubt there is “recall bias” and/or some misinterpretation etc. going on with this survey. I cannot speak for the other journals listed by the authors, but I personally find their results highly questionable.

Adam, I’m glad that you’re trying to verify the results, but unless you’re going to accuse the authors of making up their data, I think you have to offer another explanation.

Phil, I’m not accusing the authors of making anything up. I think the problem with this survey is that there is too much of a possibility that recall bias did play a role here. I’m fortunate in that I can contact folks involved in a couple of the journals listed by the survey over the last several years and ask them about this. Some were already aware of the survey and were shocked to see our journals listed at all. Is it possible they’re being dishonest? Sure. I don’t believe that’s the case. I don’t think the respondents are lying either. If any of us had the ability to contact the respondents and ask them if they had email trails or something other than their memory to “verify the results” that editors attempted to coerce them into bogus self-citations, I’d feel more comfortable accepting the data. I know that’s not possible or even realistic. This survey shows there is a PERCEPTION that this practice exists (a perception I’ve stated that I share), but I don’t see any real proof. I did not mean to get into a debate or inflame anything. I will say that if anyone ever has a current or future Editor of an Academy of Management journal engage in this behavior they can contact me directly and if it is found that the accusations have merit things will be dealt with accordingly.

This is more than anecdotal! I have found an example where an editor-in-chief of an ISI-indexed journal in “health sciences” urges reviewers to make sure submitted manuscripts include self-references to the same journal. In the instructions for reviewers, there is a numbered list describing requirements about the length of the manuscript, number of words in the title (a maximum of 12), number of keywords (max 10), and a comment that the keywords should reflect the words in the title and appear in the abstract. But what stands out in the list is the following passage:

“5. Manuscript should refer to at least one article published in ‘*** Journal of *** Sciences’ [the title of the same journal]”

One could assume that this is expected to be a reference to an article that has some bearing on the manuscript at hand, but that is not stated. The question of journal self-citation has clearly been around for a long while, and to me the biggest concern is how to distinguish whether a specific instance of self-citation is valid or not. The question “What is appropriate self-referencing?” is not as clear-cut as it might seem. Given that there could be a variety of reasons for citing a specific reference, there may well be a grey zone here, where no clear answers can be given.

Just curious – why redact the name of the journal? Do they require authors to sign a non-disclosure agreement about their instructions for authors? It’s impossible to either check on this or to pressure the journal to stop if it’s not named.

Thankfully this has never happened to me over a science career of ~80 publications to date, and I have never heard a colleague complain about this, so I cannot believe this is a widespread phenomenon in the biochemistry/cell biology/molecular neuroscience fields. However, the post leads me to wonder if editors engaged in this type of practice could evade the Thomson Reuters sanctions for excessive self-citation by reciprocal cross-citing between related journals belonging to the same publisher…?!

I haven’t experienced it either, thankfully. One may expect this kind of behavior from editors of second-tier journals that desperately want to be recognized as top-tier. One could also be cynical about the real contribution of journals in business, marketing, and finance, but the fact that citation coercion is a normative behavior for multiple titles in these fields may say something about how academics in these fields perceive the role of scholarship.

As for “reciprocal cross-citing between related journals belonging to the same publisher,” we should be on the lookout for what Georg Franck called citation cartels. I don’t see an easy mechanism for publishers to do this, however, as editorial control is divided into different silos. I do see a potential problem when managing editors oversee a collection of journals managed by the same publisher. Still, if the editor is doing his/her job, the managing editor shouldn’t have much influence here. The locus of behavioral control is centered around the academic editor, and here is where the risks are taken and the rewards are reaped.

There are ways of accumulating citations that have little to do with scientific value. The simplest way of circumventing the hurdle of productivity enhancement is the formation of citation cartels.

Franck, G. 1999. Scientific Communication–A Vanity Fair? Science 286:53-55.

Mike, I asked Al Wilhite, the senior author of the paper, whether there was any evidence of “citation cartels” among the responses to his survey. He responded that there wasn’t, and that from a coordination standpoint, self-citation would be much easier.

I have heard of two math journals with overlapping editorial boards that practice this behavior. This behavior is greatly supported by the fact that the editors publish their manuscripts in their own journals.

Thanks Phil, very interesting. I was not aware of the term previously, but am willing to bet that ‘citation cartels’ might not be so rare at the individual author level, and indeed might arise almost spontaneously…

Let me start by stating that I am most definitely not a researcher or scholar, so my understanding of some of this is limited. Also, as a matter of full disclosure, I’m employed by Academy of Management (for about a year now) and a couple of AOM journals appear on this list (albeit with numbers pretty low when all is considered). I can’t comment on what may or may not have occurred prior to my joining AOM, but I am VERY confident this behavior is not being practiced by our current group of editors. Some observations and questions:

1. I’ve no doubt some publishers and editors encourage and engage in this behavior. I once questioned someone who was advocating this type of self-citation practice and pointed out it was unethical. The response was basically “Well everyone else is doing it.” Wink.

2. What year was this survey taken?

3. Why did the authors focus on economics, sociology, psychology and business? I’d be curious to see the same study done in the fields of medicine, biotechnology etc.

4. Regarding the long tail, it seems to me that after journals 4 or 5 there’s a steep drop off in the number of supposed instances of this behavior, isn’t there?

Yes, Thomson-Reuters has their “naughty list” but I wonder if that’s enough? I’d be interested to hear thoughts on a system where authors could report this type of behavior to TR in order to potentially keep publishers/editors accountable for their actions. I don’t accept the “everyone else is doing it” attitude from my kids and think the least the STM publishing field can do is hold themselves to the same standards.

Adam, I’ve looked over the methods section and don’t see the actual survey date; perhaps it’s in the 42 pages of supplementary methods and tables. The survey asks participants to recall the previous five years. This is a long time. If the practice of coercive citation has stopped recently in Academy of Management journals, this study would not be able to discern that.

I imagine that Thomson Reuters doesn’t want to get into the business of playing citation cop. Imagine the position this would put them in: Someone issues a complaint; the journal editor claims the complaint is baseless (or the result of a simple misunderstanding), and Thomson Reuters is responsible for adjudication. Now add the possibility of a lawsuit.

On the other hand, Thomson Reuters currently takes action and sanctions journals (puts them in “time out”) when they show a strong pattern of self-citation. Where and when to draw the line of acceptable behavior is largely arbitrary.

My own feeling is that scientific communities form their own normative behaviors when it comes to publication and citation. The biomedical sciences, for example, have created very strict guidelines for disclosure of sources of bias and potential conflicts of interest. Even without explicit guidelines, individuals can change their behaviors when called out in public for unacceptable behavior, such as when an editor engages in inappropriate self-citation.

In the past I have been asked to add some references from a journal that I submitted a paper to, but after acceptance. Also recently had a paper rejected by an editor who confirmed the decision based on the lack of references to that particular journal – see below:

—–Original Message—–
From: On Behalf Of Separation & Purification Technology
Sent: March-22-12 3:56 PM
To: Chris Pickles
Subject: Editor Decision – Reject SEPPUR-D-12-00429

Ms. No.: SEPPUR-D-12-00429
Title: A Study of the Reduction and Magnetic Separation of Iron from a High-Iron Bauxite Ore Corresponding Author: Dr. Christopher Adrian Pickles All Authors: Christopher A Pickles, Ph.D.; Ting Lu, M.A.Sc.; Brandon Chambers, B.A.Sc.; John Forster, B.A.Sc.

Dear Dr. Pickles,

Thank you for your submission to Separation and Purification Technology. Unfortunately, the Editors feel that your paper is not suitable for publication in the journal and unlikely to be favourably reviewed by the referees. Main reason is that the manuscript seems more relevant for publication in another journal, which is confirmed by the lack of references to papers published in Separation and Purification Technology. We suggest you consider submitting the paper to another more appropriate journal.

Thank you for your interest in Separation and Purification Technology.


André B. de Haan
Separation and Purification Technology

I received a very similar email from Transportation Research Part B (an Elsevier journal). The most recent one I got 16 hours after submission; this merely gives enough time to check the references.

Ms. Ref. No.: TRB-D-12-00150
Title: Hours of service regulations in road freight transport: an optimization-based international assessment
Transportation Research Part B

Dear Dr. Asvin Goel,

Your paper has now gone through our initial review process. This is part of the journal’s refereeing approach that initially reviews papers for suitability for publication in Part B. The idea is that such initial reviewing could save authors and referees significant amounts of time.
The results of this process indicate that your manuscript could be potentially interesting to a number of journals but, unfortunately, the paper was not deemed to be a good fit with Part B. There was concern that the topic area of your paper did not make Part B an obvious forum for your work (this is reflected to some extent by the fact that your manuscript does not refer to any Part B papers). This is not a criticism of your paper; it simply suggests that other journals are more appropriate. Transportation Science and traditional operations research journals were thought to be possible outlets for your work – other journals may be appropriate as well.
I should also mention that much of Part B’s policy with regard to considering manuscripts for full review is motivated by the large number of submissions (less than 15% of the submitted manuscripts are eventually published). This makes the competition for journal space intense and requires us to be very selective. At the expense end, we must turn away many solid pieces of work that normally would make fine contributions.
Please understand that in declining your manuscript for Transportation Research Part B we intend no reflection on the overall quality of your work. I certainly hope that you will be successful in placing the manuscript in a more appropriate journal and that you will consider Part B as an outlet for your work in the future.

Fred Mannering
Transportation Research Part B

Interesting to see Transportation Research B mentioned here. This seems to be a problem in other transportation engineering journals as well. I am currently finalizing a paper for a prestigious journal published by Wiley Blackwell. From the very first submission I have been very strongly pressured to include citations from the journal and, in particular, papers authored by the editor, who has been particularly brazen, even going so far as to send me lists of his papers and books. Initially he requested that I include papers from a list of those recently published in the journal, which, after reading as many as possible, I did. I found some that are tangentially relevant to my topic and included those. My paper then successfully went through the peer-review process and was accepted for publication, but I was requested, again, to “update” the reference list to include more recent papers from the journal. I did not, since I had already exhausted the possibilities for legitimately citing papers from this journal. However, after reformatting the paper to match the journal’s format, I was once again sent a list of recent papers to include in my reference list. After the initial excitement of being accepted for publication in such a highly regarded journal (impact factor 3.4, which is huge for my field), I am now wondering whether I still want to publish in the journal at all…

Phil- Enjoyed hearing you speak in San Diego this January. I am afraid this practice occurs in chemistry. One new journal that I monitor started with an amazingly high impact factor. I compared the self-citation rate of this “amazing new journal” (20% according to ISI) with the second rated journal in the category (4%), and one can see how they have gamed the system to boost their impact factor. This combined with other scientists telling me the suspect journal has requested citations to the journal during the revision process mark the journal in my mind as unethical. To this end, I do not submit anything to this ethically challenged title, and encourage my colleagues to boycott them as well.

Comments are closed.