[Image: Domino tiles (Photo credit: Wikipedia)]

From an economics standpoint, self-citation is the easiest method to boost one’s citations. Every author knows this and cites his own articles, however peripheral their relationship is to the topic at hand. Editors know this as well, and some have been caught coercing authors into self-citing the journal. Other editors have published editorial “reviews” of the articles published in their own journal, focusing entirely on papers that have been published in the previous two years — the window from which the impact factor is generated.

There is a price to pay for this behavior, especially when it is done to excess. Thomson Reuters, publisher of the annual Journal Citation Report (JCR), routinely puts journals in “time-out” when their self-citation rates are high enough to greatly shift the journal’s positional rank among related titles.

There is another citation gaming tactic that is much more pernicious and difficult to detect. It is the citation cartel.

In a 1999 essay published in Science titled, “Scientific Communication — A Vanity Fair?” Georg Franck warned of the possibility of citation cartels — groups of editors and journals working together for mutual benefit. To date, this behavior has not been widely documented; when you first see it, however, it is astonishing.

Cell Transplantation is a medical journal published by the Cognizant Communication Corporation of Putnam Valley, New York. In recent years, its impact factor has been growing rapidly. In 2006, it was 3.482. In 2010, it had almost doubled to 6.204.

When you look at which journals cite Cell Transplantation, two journals stand out noticeably: the Medical Science Monitor, and The Scientific World Journal. According to the JCR, neither of these journals cited Cell Transplantation until 2010.

Then, in 2010, a review article was published in the Medical Science Monitor citing 490 articles, 445 of which were to papers published in Cell Transplantation. All 445 citations pointed to papers published in 2008 or 2009 — the citation window from which the journal’s 2010 impact factor was derived. Of the remaining 45 citations, 44 cited the Medical Science Monitor, again to papers published in 2008 and 2009.

Three of the four authors of this paper sit on the editorial board of Cell Transplantation. Two are associate editors, one is the founding editor. The fourth is the CEO of a medical communications company.

In the same year, 2010, two of these editors also published a review article in The Scientific World Journal citing 124 papers, 96 of which were published in Cell Transplantation in 2008 and 2009. Of the 28 remaining citations, 26 were to papers published in The Scientific World Journal in 2008 and 2009. We are beginning to see a pattern. This is what it looks like:

[Figure: Cited references from Park, DH et al (PMID: 20305989). Cited articles are color-coded to reveal the diversity of journals referenced.]

The two review articles described above contributed a total of 541 citations toward the calculation of Cell Transplantation’s 2010 impact factor. Remove them and the journal’s impact factor drops from 6.204 to 4.082.
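As a rough sanity check on those figures, the two impact factor values imply a denominator of roughly 255 citable items. A minimal sketch (the item count below is inferred from the two published values, not taken from the JCR itself):

```python
# Back-of-the-envelope check of the impact factor figures in the post.
# The 2010 impact factor is citations received in 2010 to items published
# in 2008-2009, divided by the number of citable items from 2008-2009.

removed_citations = 541            # citations contributed by the two reviews
if_with, if_without = 6.204, 4.082

# Inferred count of citable items (2008-2009) and total citations:
citable_items = round(removed_citations / (if_with - if_without))  # ~255
total_citations = round(if_with * citable_items)                   # ~1582

# Recomputing the adjusted impact factor recovers the post's figure:
print(round((total_citations - removed_citations) / citable_items, 3))  # 4.082
```

The round numbers work out almost exactly, which is consistent with the 541 removed citations accounting for the entire drop.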

The editors of Cell Transplantation have continued this practice through 2011, with two additional reviews: The first appears in the Medical Science Monitor, with 87 citations to Cell Transplantation and 32 citations to the Medical Science Monitor, all to papers published in 2009 or 2010. The second review appears in The Scientific World Journal, containing 109 citations to Cell Transplantation and 29 citations to The Scientific World Journal, all of which were published in the same two-year window from which the 2011 impact factor score will be derived.

In 2011, the editors of Cell Transplantation also published a similar review article in their own journal, citing a smaller sister journal, Technology and Innovation, 25 times — 24 of which were to papers published in 2010. The remaining references cite Cell Transplantation papers published in 2009 or 2010. The lead author of the article is the Editor-in-Chief of Technology and Innovation; the last author is co-editor of the journal.

From a strategic standpoint, placing self-referential papers in a cooperating journal is a cheap and effective strategy if the goal is to boost one’s impact factor. For an article processing fee of just $1,100 (Medical Science Monitor), the editors were able to return 445 impact factor-contributing citations to their journal. Best of all, this kind of behavior is difficult to track.

The JCR provides citing and cited-by matrices for all of the journals they index; however, these data exist only in aggregate and are not linked to specific articles. It was only upon seeing very large numbers amidst a long string of zeros (that, and a tip from a concerned scientist) that I was alerted to something odd going on. Identifying these papers required some fancy cited-by searching in the Web of Science. The data are there, but they are far from transparent.

The ease with which members of an editorial board were able to use a cartel of journals to influence their journal’s impact factor concerns me greatly: the cost is very low, the rewards are astonishingly high, the practice is difficult to detect, and it can be facilitated very easily by overlapping editorial boards or by cooperative agreements between them. What’s more, editors can protect these “reviews” from peer review by labeling them “editorial material,” as some are. It’s the perfect strategy for gaming the system.

For all these reasons, I’m particularly concerned that of all the strategies unscrupulous editors employ to boost their journal rankings, the formation of citation cartels is the one that could do the most harm to the citation as a scientific indicator. Because of the way it operates, it has the potential to create a citation bubble very, very quickly. If you don’t agree with how some editors are using citation cartels, you may change your mind in a year or two as your own title languishes behind those of your competitors.

Unlike self-citation, which is very easy to detect, Thomson Reuters has no algorithm to detect citation cartels, nor a stated policy to help keep this shady behavior at bay.

One way to detect an offending paper would be to look at the share of impact factor-contributing references directed toward a single journal. Computationally, this may be the easiest step. Determining how much influence is excessive and under what circumstances will ultimately be the biggest challenge.
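As a sketch of that first computational step, here is one hypothetical screen in Python. The data layout, threshold, and minimum reference count are all illustrative assumptions, not JCR policy:

```python
from collections import Counter

def flag_concentrated_references(references, if_year, threshold=0.5, min_refs=50):
    """Flag papers whose impact-factor-window references are concentrated
    in one journal.

    `references` maps a paper ID to a list of (journal, year) tuples for its
    cited references. A reference counts toward a journal's impact factor for
    `if_year` only if it was published in the two prior years.
    """
    window = {if_year - 1, if_year - 2}
    flagged = {}
    for paper, refs in references.items():
        in_window = [journal for journal, year in refs if year in window]
        if len(in_window) < min_refs:
            continue  # too few window references to be worth flagging
        journal, count = Counter(in_window).most_common(1)[0]
        share = count / len(in_window)
        if share >= threshold:
            flagged[paper] = (journal, round(share, 2))
    return flagged

# Toy data modeled on the 2010 review described above:
refs = {"review-A": [("Cell Transplantation", 2008)] * 445
                    + [("Medical Science Monitor", 2009)] * 44
                    + [("Other Journal", 2005)]}
print(flag_concentrated_references(refs, 2010))
# {'review-A': ('Cell Transplantation', 0.91)}
```

On the toy data, the paper is flagged because roughly 91% of its impact-factor-window references point at a single journal. As the text notes, the hard part is not this computation but deciding what share is excessive and under what circumstances.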

Science editors need to discuss how to deal with this issue. If disciplinary norms and decorum cannot keep this kind of behavior at bay, the threat of being delisted from the JCR may be necessary.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist.


64 Thoughts on "The Emergence of a Citation Cartel"

Outrageous. Thanks for hunting this out. I hope you can get some action taken.

Who I really feel for: authors who were attracted to Cell Transplantation by its apparently healthy Impact Factor — perhaps feeling that they needed an IF=6.204 journal on their CV to advance a tenure case or grant proposal.

Obvious corollary: we as a community really need to stop trusting that Impact Factor means anything at all. If we must count citations, let’s at least count citations of individual articles rather than of the journals they happen to appear in.

Mike, is this really the fault of the Impact Factor, or the failure of editors to prevent “reviews” like this from being published? Or is it the failure of the scientific community to detect this kind of behavior early on? There seems to be a quick response from some to blame the metric, when the metric is not the problem; the problem is those who attempt to game it.

I reject the “or” in that question. It’s the fault of all of these things.

The existence and reification of an easily gamed metric certainly doesn’t help matters.

That is the “guns don’t kill people …” argument. Of course the metrics are not to blame, the people are. But that is hardly the point.
There are huge potential rewards to gaming the metric, and relying solely on an easily-gamed metric to measure the impact of scientists and scientific journals is dangerous. For better or worse, the use of metrics is here to stay. Fixing/replacing existing metrics so that they’re not so easily gamed is important.
Thank you indeed for digging up this story.

I’m not sure I’d characterize the metric as “easily gamed”. Having to put together a covert cabal of unscrupulous editors from unrelated journals and write review articles that must pass public muster seems like a lot of hoops to jump through. It’s certainly a lot more difficult than setting up a bot to download a paper a few thousand times to game a usage metric. And since the impact factor is overseen by a private company that does dole out penalties to those caught in shenanigans like this, there is at least some means of correcting the system when gaming occurs.

None of which should be read as an endorsement of the way the impact factor is currently used. If we instead think on an article-level basis, then this becomes a less useful gaming method. You’d have 445 individual papers with one citation each, rather than a journal with 445 citations that mattered.

David, I take your point. I’m sure it was not easy to set this up — though I’m not sure how covert it all was. My point, however, still stands: the rewards for increasing the IF of one’s journal are enormous. And while Thomson Reuters may punish this sort of shenanigans, given the payoff we have to expect and try to prevent this sort of thing.
Sure, it would also be great if everyone did the research Phil bothered to do (or at least everyone that made funding/hiring decisions), but I won’t be holding my breath.

Amazing! Editors can form a citation cartel and game the system. I can’t believe editorials are still being counted as citations.

Thank you for digging into this story. I wish policy makers were more aware of these problems: they would realize the urgency of an impact-factor-independent system for judging scientists’ work.

Arguably, might it not actually be good to subvert and thereby discredit and hopefully ultimately trash all mechanical impact measures?

Simple metrics beget gaming. Surprise, surprise.

If the community manages to detect and prevent this sort of abuse in the future, another game will take its place. The only solution I can see is to abandon or minimize the “impact factor” as a means of evaluating journal quality.

A nice article, by the way. Thanks.

I’m amazed Thomson Reuters isn’t checking for this. They have all the data on cites, and the year-to-year changes here are startling. Don’t they have database crawlers trolling through their system to find oddities like this? It seems like something any IT student could write. Also, why doesn’t Thomson Reuters track editorial boards, to flag journals for investigation?

It seems the system is built on trust, but has moved to the point of “trust but verify.”

Thomson Reuters is aware of the citation practices of CELL TRANSPLANTATION. The journal came to our attention following the release of the 2010 JCR data. Earlier this year, we communicated our findings to the Editorial Board to allow them to comment on their practice.

In response, we are now developing a method to detect similar behavior across any of the 10,000+ journals in the JCR. This will allow us to apply an objective standard and consistent policy to determine when a calculated Journal Impact Factor is not an accurate representation of the citation use of the journal by the broader literature.

A very similar method was applied nearly 10 years ago to the problem of journal self-citation. We now have an established practice both for the very clear reporting of how journal self-citation contributes to the calculated value of the Journal Impact Factor, and for the decision to remove a journal from the JCR for excessive self-citation.

The completeness and mutual availability of the data published in the JCR and in Web of Science allowed this unusual pattern of publication/citation not only to be surfaced, but also to be traced to specific articles and authors. The simplicity of the Journal Impact Factor calculation, along with the detailed data surrounding it in the JCR and Web of Science, makes “gaming” easier to detect.

We do not police the journal publishing world – but we provide visibility.

Thanks, Phil for opening this up to discussion.

What’s more, editors can protect these “reviews” from peer review if they are labeled as “editorial material,” as some are. It’s the perfect strategy for gaming the system.

Is this true? If I am correct, editorials are not citable items, so the references in editorial material should not be counted. Citable items are generally those sort of articles that would be peer reviewed; but I might be wrong?

Rachel, these “reviews” are the source of the citations, not the target.

Hi Phil. What I mean is: A Review Article would be peer reviewed; Editorial Material normally is not. So if some journals are gathering (self-)citations by adding references to a Review but labeling it as an Editorial to circumvent peer review, then it should not work, because Editorials are not citable items and the refs would not be counted. It does work if the (self-)references are published in a Review Article, which are citable items, but then it would be expected to be peer reviewed (which is also easily manipulated of course).

My point is if these fake Reviews are labelled as Editorials to circumvent peer review, then the refs would not count by being in a non-citable item.

While you are correct about the calculation of the denominator of the Impact Factor, you are incorrect about the calculation of the numerator. I’ve included a description below from Marie McVeigh, director of the JCR and Bibliographic Policy at Thomson Reuters, who published a detailed article on this topic in Learned Publishing [1]. The article is freely available:

The numerator of this ratio considers the journal as a whole, and includes any citations to the journal title; thus it depends on accurate and complete aggregation of citations to a journal title. The denominator considers the journal as a collection of items that are likely to influence the scholarly literature, by way of citation; thus only the scholarly or ‘citable’ items in the journal are included.

[1] Hubbard SC, McVeigh ME. 2011. Casting a wide net: the Journal Impact Factor numerator. Learned Publishing 24: 133-7.
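To make that asymmetry concrete, here is a toy sketch with entirely invented numbers (nothing below comes from the JCR; it only mirrors the numerator/denominator logic quoted above):

```python
# Toy illustration (invented numbers): the impact factor numerator counts
# every citation a journal receives, regardless of the type of item the
# citation comes from, while the journal's own denominator is restricted
# to "citable" items (articles and reviews, not editorial material).

citations_received_by_source_type = {
    "from_research_articles": 900,
    "from_reviews": 500,
    "from_editorial_material": 445,  # still counted in the numerator
}
citable_items_in_window = 255  # the cited journal's articles + reviews only

impact_factor = sum(citations_received_by_source_type.values()) / citable_items_in_window
print(round(impact_factor, 3))  # 7.235
```

This is why labeling a citation-heavy “review” as editorial material changes nothing for the journal it cites: those 445 references flow into the cited journal’s numerator either way.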

It’s ridiculous but, yes, it’s true. A now-classic 2006 PLoS Medicine editorial, The Impact Factor Game, discussed the mutability and negotiability of Impact Factor in some detail as that journal’s own first impact factor was impending. Key quote:

“During the course of our discussions with Thomson Scientific, PLoS Medicine‘s potential impact factor — based on the same articles published in the same year — seesawed between as much as 11 (when only research articles are entered into the denominator) to less than 3 (when almost all article types in the magazine section are included, as Thomson Scientific had initially done — wrongly, we argued, when comparing such article types with comparable ones published by other medical journals).”

From Rachel – and generally true: “A Review Article would be peer reviewed; Editorial Material normally is not.” You are citing an Editorial. We have published a peer-reviewed description of the development of the Journal Impact Factor denominator:

The denominator of the Journal Impact Factor is not negotiated.

Good – awesome research here. I’m glad some hard-working people are calling out fraud in this citation-obsessed world. Maybe we’ll get our egos and shitty scientometrics out of our arses and get back to focussing on the science. Citation rates should have little relevance anyway in a world where we are known to our peers and we can PubMed or Google Scholar search for any articles we need.

The analysis should be extended to investigate what percentage of cartel activity by unethical editors and reviewers (not just at journals but also on grant panels and elsewhere) is related to Chinese nationals, where such activity is accepted and standard. Unethical editors and reviewers, no matter what their nationality, are basically screwing science to the point where publication efforts require so much energy that it is NOT WORTH THE INVESTMENT.

Get a patent instead.

It appears SEO techniques (in particular link farming) have invaded academia.

The irony is that PageRank was originally inspired by citations in academic publications. SEOs learned how to “game” Google’s algorithms, and now their techniques seem to be inspiring questionable practices in academia!

SEO “invaded” academia years ago, by the way. It’s nothing new to academic publishers or universities.

Another subtle version of a citation cartel has been described in:

Mannino, D.M. (2005). Impact factor, impact, and smoke and mirrors. American Journal of Respiratory and Critical Care Medicine 171(4): 417-418.

“Authors invited to resubmit articles to the AJRCCM receive the following statement: ‘In revising your manuscript, please update your references to ensure that you discuss your work in the context of the most recent research in this area. In particular, you need to compare your references against the articles in Year in Review 2002 and 2003, accessible through the File-Cabinet icon on the homepage of the AJRCCM.’”

When I heard the news, I was very angry! It is very unfair: they punish the journal and also punish the authors. We worked hard to revise our papers and paid the fees for publication. Then the impact factor of the journal disappeared?! The journal cheated its authors. This summer I needed to count my CJA points to upgrade my scholarly position. I am very sad because I will lose 60 points from this journal! It really hurts!

This is really funny. Though, if we are all sincere, almost all journals promote self-citation and cooperative citation in some way. Even individuals do. The impact factor (IF) system has generated much controversy and has increased the rate of malpractice in scientific publishing. The IF is no longer used merely as an index of journal performance, but as a measure for evaluating articles and scientists/researchers. Journals now use the IF as a form of promotion to attract more articles for publication.
The metric and its use certainly led to this act.
I’m sure a good chunk of journals out there do the same, though maybe in a milder form.
I really do not see the need for this metric system. We should rather find a way to promote quality manuscript evaluation and peer-review processes.

I would like to support your opinion!

I am a reviewer of a few articles in different journals, and I am NOT able to find ANY rules on this topic!

First of all, I found out that:

1. The papers questioned by Thomson-Reuters are meta-analyses in nature. The question is: does Thomson-Reuters not allow the publication of meta-analytic papers based on a single journal? If they do not allow it, that would be a bureaucratic intrusion on authors’ freedom. There is no such rule published by T-R anywhere.


2. T-R is a global-scale monopolist in journal bibliometrics, yet it has not published:
– inclusion criteria
– exclusion criteria (e.g., what is the limit of “self-citations” for a journal? how many citations does Thomson Reuters consider excessive in one article?)
– suspension criteria
– ethical rules
– criteria for unacceptable practices

Thomson Reuters does not provide analytical evidence or background for a suspension and/or exclusion to the journal. There is no warning to a journal that it is approaching the exclusion criteria (!)
T-R decisions are purely arbitrary and subjective in nature.
No regulations are publicly available for publishers (or reviewers):

There is no warning procedure!
Thomson Reuters does not take the editors’ explanations into consideration.
Journals have to prove themselves “innocent”; otherwise they are considered “guilty”.

All journals which published articles by authors associated with the Cell Transplantation editorial board have been stigmatized as “participants” in the impact factor manipulation process and automatically suspended from the JCR. However, it is very easy to find out how the impact factors of the other “participants” behaved, and Thomson-Reuters has all the analytical tools to check. They did not do it. They did not provide an objective numerical analysis of the Medical Science Monitor’s IF behavior, which would be a perfect indicator of “conscious manipulation” of the impact factor calculation.

For example, for the Medical Science Monitor (I have also reviewed a few articles for this journal), the label “participant” is a pure abuse of the term! Indeed, MSM published two articles by authors associated with Cell Transplantation: one in 2010 and another in 2011. From a purely numerical analysis it does not look good, as the first article had over 400 citations (most of them to Cell Transplantation) and the second one 154, a number driven down from 519 in the process of peer review. The titles of both articles were clever enough to indicate a scientific statistical analysis of trends in the field based on articles published in Cell Transplantation. Both articles underwent a standard peer-review process in which the reviewers voted for publication. At the time of publication it did not occur to us that these publications might represent bad editorial practice, and we decided to publish the articles in good faith after positive comments from two independent reviewers, one from Taiwan and one from Europe. We did not request “reciprocal” publication in Cell Transplantation at any time, and we did not publish in that journal at all.
Looking back, we were indeed involved in this process, but as victims; we did not participate consciously in this activity. The impact factor of the Medical Science Monitor has been rather stable, with only minimal increases or decreases in recent years. The decision to suspend the Medical Science Monitor from the JCR also victimizes the journal’s authors. Some of them (young doctors) are even losing their jobs in Poland!

Questions regarding the article bibliography list:

1. What should be the limit on the citation list that authors are allowed to provide without being accused by Thomson-Reuters of fraudulent practices? And how does this correspond to the independence of editors and the freedom of authors?
2. Journals publish scientific reports that are meant to be read and cited. Why is it not welcomed by Thomson-Reuters when authors cite previous articles published in the journal they submit to (so-called self-citations)? The assumption of ill behavior is pure prejudice.

Thomson-Reuters has a right to protect its products, here the impact factor, from being artificially distorted or otherwise influenced from the outside. However, Thomson-Reuters also has the power to destroy journals, their scientific reputations, and their financial strength. The Thomson Reuters JCR list directly influences scientific careers, as researchers are evaluated annually based on their publications. Therefore, any action against a journal should be preceded by a true investigation of the problem and the presentation of objective proof of guilt. Journals should have the right to defend themselves. We believe that potential victims (journals involved accidentally and unintentionally) should be identified and spared. Thomson-Reuters has all the means and tools at its hands.

Good Publication Practice Committee
66 East 79 Street,
New York, NY 1002
(2012) 288-6010; fax (2012) 288-6024

New York, 15th July, 2012

Marian Hollingsworth
Director, Publisher Relations
Thomson Reuters

Dear Ms Hollingsworth

As Deputy Chairperson of the Good Publication Practice Committee, a socially active, independent group of scientists, I, in conjunction with my team of colleagues, have decided to draw up this letter to turn your attention to the unclear procedures of Thomson Reuters and their widespread, real consequences for the international scientific community.

I am writing especially in regard to the decision of suppressing Medical Science Monitor and withdrawing its Impact Factor score from the JCR 2011 report.

It is our understanding that your decision was based on an analysis of publications by a group of authors in Florida who misused the reference sections of their papers for self-promotion. It is clear to us that the Medical Science Monitor is the victim here rather than “a participant” and should not be punished for the behavior of certain contributors.

I’d like to emphasize that we understand and fully support the Thomson-Reuters efforts to deliver true and undistorted citation data to the scientific community.

Taking the Medical Science Monitor case, I would like to point out to you several important general issues associated with the current practices of Thomson Reuters which, in our opinion, influence transparency, fairness and the balance between Thomson Reuters and the journals it evaluates.

1. A clear statement on allowed/disallowed types of articles. The questioned articles published by the Medical Science Monitor in 2010 and 2011 were meta-analyses. In order to avoid unclear situations, we would like to know your position on the following issues:
a. is it acceptable to Thomson-Reuters that journals publish meta-analysis papers?
b. does Thomson Reuters disallow meta-analyses of single journals?

2. Literature lists: Scientific discussion and research progress are based on the citation of one scientist by another. It is common practice for authors to cite their own earlier works in order to back up their new theses. Likewise, authors also cite previous works published in the journal to which they are submitting, to show the continuum of the scientific work. Our questions are:
a. does Thomson-Reuters policy set limits on the citation list at the end of an article? If yes, what are the limits?
b. what limit on authors’ self-citations in the literature list does Thomson-Reuters consider “fair”?
c. if there are limits set by Thomson-Reuters, where are they publicly available to journals, since we have not been able to locate such instructions on the Thomson-Reuters web service?

3. The need for a clear and fair policy. Thomson-Reuters’ JCR reports are used worldwide, and not only by librarians, for whom the impact factor was designed. Regardless of the Thomson-Reuters stance, the journal impact factor has been widely adopted in the annual evaluation of the scientific activity of individual researchers as well as institutions. The implications of Thomson-Reuters’ decisions are therefore much more widespread and profound than might be expected. Authors published in suspended journals are not promoted or lose their jobs; institutions are downgraded and lose part of their government funding. Setting up a clear and fair policy of interaction between Thomson-Reuters and journals is an issue of the highest priority. Without such a policy, any decision made by Thomson-Reuters will be regarded as arbitrary, anecdotal and subjective. The policy should especially address and regulate the following issues:
a. inclusion and exclusion criteria to/from the JCR and other Thomson-Reuters products
b. clear limits of allowed citations / self-citations considered by Thomson-Reuters fair and non-manipulative
c. ethical rules and unacceptable practices criteria for journals
d. warning procedures
e. investigation standards for cases in which potential manipulation by a journal is discovered.

We are ready to collaborate with you to set up these standard procedures, which will help Thomson Reuters lead the scientific community with transparency and fairness. We understand that this process will take time; we therefore propose the temporary implementation of basic fair procedures, such as:

1. Warning notifications when potential manipulation activity is discovered. This would allow journals to work hand-in-hand with Thomson Reuters to eliminate the threat of artificial distortion of the impact factor and other indicators.
2. Excluding citations from questionable articles from the calculation until the issue has been cleared up.
3. Implementation of retraction requests for the questionable articles.

We trust in the high standards and work ethics of Thomson-Reuters’ staff and executives. We therefore cannot agree with arbitrary and subjective decisions such as the suspension of the Medical Science Monitor, which was not based on clear terms and conditions and was made without warning or a true investigation of the issue.

Let me point out that in our opinion there is no evident conflict of interest on the part of the journal editors and staff, who were unaware of this apparent violation of some rules, nor were they informed of the rules that were to be followed. Your decision has caused a great upheaval in certain fields of academic research, and it seems to me it is quite unfair to penalize the journal for the mistakes of some contributors.

For these reasons, we hope you can reverse your decision regarding the inclusion of the Medical Science Monitor in the 2011 JCR, and that this can be handled in an amicable way.

I’d also like to inform you that the editors of the Medical Science Monitor have already retracted the allegedly fraudulent articles.

I am looking forward to your reply,
Sincerely yours,

Prof. Maria M. Pachalska, M.D., Ph.D.
Good Publication Practice Committee

Just for clarification, the address and phone number you provide for the Good Publication Practice Committee are those of Jason W. Brown, MD, professor of neurology at the NYU School of Medicine. The email address, however, appears to be your own in Poland, which, coincidentally, is the same email address specified for another commenter on this post, Prof. Boguslaw Franczuki, M.D.

Would you please clarify?

FYI, the proper area code for New York City is 212 and not 2012.

I have published on my ten years’ experience in physiotherapy, and I chose Med Sci Monit because of its high impact factor, which is important for my scientific career. I would like to support the opinion and activity of Prof. M. Pachalska.

Good Publication Practice Committee
66 East 79 Street,
New York, NY 1002
(212) 288-6010; fax (212) 288-6024

New York, 17th July, 2012

Dear Phil,

Thank you for writing such an important article as well for your prompt response. Indeed, the phone number and street address for the correspondence of our Good Publication Practice Committee is the one for Professor Jason W. Brown, the Chairman of the Committee. We have his consent to use his address simply for matters of convenience.

I contacted you via our public e-mail address, which is the most convenient for me as one of the Deputy Chairmen of the Committee — a Committee composed of many eminent professors from around the world.

As I mentioned earlier, we are a group of independent scientists interested in good and fair publication practices. We have been active on the pro publico bono principle and use our small individual budgets when needed, but this should not discredit our activities and/or our intentions, which are entirely honorable.

I would rather like to hear your comments on the important issues I pointed out in my commentary regarding the lack of clear Thomson-Reuters policies available to journals. In my opinion this is a serious issue, since the lack of such rules makes the Thomson-Reuters decision not only arbitrary, subjective and unjustified, but also (potentially) illegal.

One thing here is for sure: the reputation of Thomson-Reuters as an ethical, fair and objective scientific institution has been severely damaged on the basis of this single act alone. It is very easy to destroy the reputation of a journal through careless public allegations, like the one published within the framework of the Scholarly Kitchen, which could be construed to suggest the involvement of the Medical Science Monitor in the doubtful practices of a particular group of authors.

Nobody has taken the trouble to check who benefits from the “process” described in your article, and it is obvious that ONLY the one who benefits should be penalized. In theory, of course, because, as I have mentioned, in a civilized world one must, in order to penalize, clearly indicate the paragraph that has been violated. In this instance that has not been done (the Medical Science Monitor has not raised its Impact Factor since 2008).

In my opinion, Thomson-Reuters (if it cares for its own reputation) should reverse the unjustified decision to suspend the Medical Science Monitor from JCR 2011 and re-establish its impact factor without delay. On the other hand, we should also consider whether we, as scientists, really need Thomson-Reuters’ manipulative, subjective and expensive reports. We have received, and are constantly receiving (and the matter is only two days old!), a string of letters from young academics highlighting the personal problems they are incurring because their integrity and honesty have been questioned in their annual academic self-assessment reports as a result of Thomson-Reuters’ decisions. Several of them have raised the hope that you, as the author of such an article and blog, may be able to help resolve this most delicate matter, which is threatening their academic careers and jobs and has motivated some to consider legal action.

So as to ensure that no tragic consequences result from such an extraordinary and unexpected situation, our Committee, having become acquainted with the contents of your article, has suggested the retraction of certain articles from the Medical Science Monitor which were felt to constitute a potential grey area with regard to professional practice, in light of the newly perceived need for heightened transparency in citation (as raised in your article). The editors of the Medical Science Monitor have informed us that such a step was taken (15th of July 2012).

The more important question is what would have happened if you had not written the article from which we have learnt the importance of such reviews; that is, information obtained via a third party rather than from clearly stated rules and regulations as to what is acceptable ethical professional academic practice. This is why, in my letter to the Director of Thomson-Reuters Publisher Relations, I asked where just such rules and regulations are to be found; and if there are in fact none, then we could collectively cooperate and help Thomson-Reuters to develop such rules and regulations, and with them an ethical code of practice.

In discussing whether an author is permitted to proceed in such a manner, the issue perhaps lies in a fresh consideration of the different statistical methods used to evaluate such reviews. Equally, consideration should be given to whether an author has the personal freedom to write such reviews.

To sum up, we are of the view that the Medical Science Monitor is not only innocent but in fact a victim, as it is losing its reputation while the blame really lies with the author whose multiple citations appeared in a review; and we hope that its Impact Factor will be reinstated.

BTW: thank you for pointing out the NYC area code, which is indeed 212. My typing error!

Prof. Maria M. Pachalska, M.D., Ph.D.
Good Publication Practice Committee

We have received, and are constantly receiving (and the matter is only two days old!), a string of letters from young academics highlighting the personal problems they are incurring because their integrity and honesty have been questioned in their annual academic self-assessment reports as a result of Thomson-Reuters’ decisions. Several of them have raised the hope that you, as the author of such an article and blog, may be able to help resolve this most delicate matter, which is threatening their academic careers and jobs and has motivated some to consider legal action.

If there is any group that has let you and your colleagues down, questioned your integrity and honesty, and put the trajectory of your academic careers in peril, it is the editors of the journals who authored these review papers and permitted them to be published. Perhaps you should direct your pleas toward them and not the journalist who uncovered the story. As for being delisted from the 2011 Journal Citation Report, this is something that you will need to take up directly with Thomson Reuters. I’m sorry, but I cannot serve as your advocate.

It is easy for you Americans to cancel us like a soap bubble! Nobody even thinks about people from Eastern Europe. How difficult it is for us to publish in English. Years of hard scientific and clinical work go into preparing an article. Who on earth decided this?! We earn 700 USD monthly. And now I don’t know what will happen. I lost an impact factor of 1.699 twice! For me it is life or death! Why do you hit innocent authors?!!

Good Publication Practice Committee
66 East 79 Street,
New York, NY 1002
(212) 288-6010; fax (212) 288-6024

New York, 17th July, 2012

Dear Phil,

Thank you for your reply, though I am afraid you misunderstood my message.

Firstly, I sent the first letter for your attention only, to point out that I had written to the Director of Thomson-Reuters in relation to the matter you raised.

Secondly, I thought I would be able to direct to you my doubts regarding the lack of any policy on what publication behavior is acceptable under the criteria adopted in such a review, as I thought that everyone contributing to this blog had the interests of academia at heart; hence my request for a reply as to whether Thomson Reuters has published any such criteria.

This question, however, remains open for Thomson Reuters to answer. There are no published rules limiting the number of cited items.

We may discuss whether such practices as those of the Florida group of authors should be acceptable or NOT in the future, but that is all: no judgement, no stigmatization, no punishment.

Thirdly, NOBODY is expecting you to be the advocate of the victimized. I was merely informing you of what the whole situation has led to as a result of the already infamous absence of rules on citation quotas.

Fourthly, I will OBVIOUSLY be conducting all further correspondence exclusively with Thomson Reuters.

And for now, I would just like to thank you very much for such an interesting and important article, for replying to my letter, and for an inspiring exchange of thoughts.

Best regards to you and all our other colleagues on this blog.

Prof. Maria M. Pachalska, M.D., Ph.D.
Good Publication Practice Committee
