Image: HMS Bounty replica sinking during Hurricane Sandy, 29 October 2012 (public domain image). Does this look like your journal?

If journals are like ships, then the Editor-in-Chief and the scrappy crew of associate editors are ultimately responsible for the leadership and direction of a journal. When the ship runs into trouble–an allegation of plagiarism or fraud in one of its papers, for example–editors are supposed to take control of the bridge, demonstrate integrity and fairness, and ultimately take responsibility for the reputation of the journal. If a journal were like a ship, the Editor-in-Chief and editorial board would not abandon their journal during a catastrophe. They would go down with their ship. This is their duty, and the world expects this behavior.

The equivalent of a shipwreck happens each year as Thomson Reuters suspends dozens of journals from the Journal Citation Reports (JCR)–an annual publication that reports the Impact Factor for thousands of titles–for engaging in publication behaviors that distort the citation record. This year, 38 titles were suspended from receiving an Impact Factor: 23 for high levels of self-citation and 15 for “citation stacking,” an ambiguous label for what most would consider, unambiguously, a citation cartel.

In 2012, I reported on the first instance of a journal citation cartel: four biomedical journals with overlapping editorial boards that were publishing selective reviews of each other’s journals. These “reviews,” often labeled as editorials, delivered hundreds of citations to member journals in the cartel, significantly raising their Impact Factors–and their standing–among other titles in their field. At that time, Thomson Reuters had no mechanism to detect citation cartels.

This year, most notably, six business journals were suspended from the JCR for citation stacking: Enterprise Information Systems (Taylor & Francis); Management Decision (Emerald Group Publishing); International Journal of Production Economics (Elsevier); International Entrepreneurship and Management Journal (Springer); Systems Research and Behavioral Science (Wiley); and The Service Industries Journal (Taylor & Francis).

The JCR Notices page provides some pretty shocking details on the flow of citations among these six journals. For example, 95% of all citations flowing from The Service Industries Journal to Management Decision in 2013 were focused on the prior two years of publication (2011 and 2012)–the window from which the Impact Factor is calculated. Similarly, 94% of all citations from International Entrepreneurship and Management Journal to Management Decision in 2013 were focused on the past two years. Taken together, these two journals were responsible for 71% of all citations counting towards Management Decision’s 2013 Impact Factor.
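
To see why stacking citations inside that two-year window is so effective, it helps to walk through the arithmetic of the metric itself. The sketch below uses made-up numbers (not the actual JCR data for any of these titles) to show how a block of citations delivered by partner journals can inflate a two-year Impact Factor:

```python
# Illustrative sketch only: the figures below are hypothetical, not JCR data.
# A journal's 2013 Impact Factor is the number of 2013 citations to its
# 2011-2012 items, divided by the number of citable items it published
# in 2011-2012.

def impact_factor(window_citations, citable_items):
    """Two-year Impact Factor: citations received in year Y to items
    published in Y-1 and Y-2, divided by the count of those items."""
    return window_citations / citable_items

citable_items = 200      # hypothetical citable items published in 2011-2012
total_citations = 300    # hypothetical 2013 citations to those items
from_partners = 210      # portion delivered by two partner journals (70%)

print(impact_factor(total_citations, citable_items))                  # 1.50
print(impact_factor(total_citations - from_partners, citable_items))  # 0.45
```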

While these details help explain why the titles were delisted from the JCR, Thomson Reuters does not reveal the publication strategy that led to their suppression. Did editors engage in publishing their own systematic reviews of each other’s papers? Or did they coerce authors to do their bidding–a tactic that may be endemic to parts of the business literature? Whatever the cause(s), Thomson Reuters’ response was to delist all six titles for tactics that ultimately distort its evaluation and ranking of journals.

Delisting a journal from the JCR does not necessarily sink a journal, but it clearly does harm to the profile of the title when it is reinstated. Cell Transplantation received a 2010 Impact Factor of 6.204 before being suspended in 2011 for citation stacking. It was reinstated in 2012 with an IF score of 4.422, which dropped to 3.578 in 2013. Medical Science Monitor, another title suspended in 2011, dropped from 1.699 in 2010 to 1.358 in 2012, and to 1.216 in 2013.

And this is where the shipwreck metaphor fails. Editors rarely go down with their ship. Many simply continue their role after their journal is delisted. Others are quietly removed from the mastheads and asked to join other editorial boards if they are still perceived to hold any clout, or simply retreat to their offices to resume their academic careers of teaching and publishing. The real victims of journal suspension are not the captains, but the passengers–the authors who decided to sail with the journal.

It would be easy to blame the editors for their willing participation in a citation cartel. But before we hold them accountable, we should at least acknowledge that their intentions–to draw attention to their journals–were not entirely bad. Editors are supposed to promote their journals and the articles published therein. How they accomplished this goal, however, is open to debate.

It would also be easy to blame the publishers of these six journals for providing lax oversight of their editors. But before we do so, we should remember that editors and authors quickly become incensed when publishers begin micromanaging their content, as in the recent controversy over an article published in the Taylor & Francis journal, Prometheus. Academics can be quick to blame organizations when the fault is staring back at them in the mirror.

Lastly, it would be tempting to just throw up our hands and blame the system. If you are an author who is evaluated by the journals in which you publish, discovering that you were just published in a delisted journal can be a horrible shock. Why should you be punished for the ineptitude of the captain and his crew?

Delisting a journal from the JCR sends a very strong signal to the community of editors about what Thomson Reuters is willing to do if it perceives that you are gaming its system. For most editors, the benefits are just not worth the risk, especially when their authors suffer so much collateral damage. Other editors, clearly, are willing to take that risk.

Is there a way to target those who are responsible for gaming the system without punishing innocent authors? I’d like to propose a different solution to delisting journals and would like to engage our readers in working through the benefits, drawbacks and unintended consequences of this change. Here it is:

Instead of delisting a journal (for extreme cases of self-citation or citation stacking), Thomson Reuters should flag the journal in its report, highlight the offending article(s), and assign an Impact Factor without the offending article(s).
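
As a rough illustration of how that recalculation might work (the data structures here are my own assumptions, not anything Thomson Reuters has described), the flagged articles’ citations could simply be excluded before the ratio is recomputed:

```python
# A minimal sketch of the proposed adjustment, under assumed data structures:
# each window citation records the article it came from, and the offending
# (flagged) article IDs are known. Citations originating from flagged
# articles are dropped before the Impact Factor is recomputed.

def adjusted_impact_factor(citing_article_ids, citable_items, flagged_ids):
    """Recompute the two-year IF after discarding citations that come
    from flagged (offending) articles."""
    kept = [a for a in citing_article_ids if a not in flagged_ids]
    return len(kept) / citable_items

# Hypothetical usage: 300 window citations, 210 of them from one flagged
# editorial, against 200 citable items.
citations = ["flagged-editorial"] * 210 + ["regular-article"] * 90
print(adjusted_impact_factor(citations, 200, {"flagged-editorial"}))  # 0.45
```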

Would such a solution be preferable to the current model? Or, by fixing the collateral damage problem, does it create a whole new set of unanticipated problems?

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

59 Thoughts on "When a Journal Sinks, Should the Editors Go Down with the Ship?"

I think I see a problem with your suggestion: it’s not a punishment, merely a repair. If the policy that you suggest were introduced, then journals would have no incentive (besides the desire to do The Right Thing) not to try speculative citation-rigging on the off-chance that it goes undetected. There would be no downside.

There are at least two large downsides. First, the IF goes down. Second, the journal is publicly shamed, which could reduce submissions.

The IF wouldn’t go down any lower than its proper value would have been had the editors never attempted the citation-rigging. (Though the public shaming would have some effect.)

I see your point about the IF. One simple penalty would be to leave the offending articles in the item count but not include their citations in the citation count. This lowers the IF. A penalty formula like this could easily be adjusted to make it bigger. But this penalizes those authors who had no part in the scam, which we are trying to avoid, though not as much as not giving an IF at all. Not a simple design problem, hence interesting.
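
A minimal sketch of the penalty this comment describes, under one reading of it: the offending articles stay in the item count (the denominator), while the citations they received are subtracted from the citation count (the numerator), with an adjustable weight to make the penalty harsher. All numbers and names here are hypothetical.

```python
# Rough sketch of the penalty formula described above (assumed interpretation):
# offending articles remain in the denominator, but the citations they
# received are removed from the numerator, scaled by a penalty weight.

def penalized_impact_factor(total_citations, citations_to_offending,
                            citable_items, penalty_weight=1.0):
    numerator = total_citations - penalty_weight * citations_to_offending
    return max(numerator, 0) / citable_items

# Hypothetical: 300 window citations, 120 of them to offending articles,
# 200 citable items in the window.
print(penalized_impact_factor(300, 120, 200))                     # 0.9
print(penalized_impact_factor(300, 120, 200, penalty_weight=2.0)) # 0.3
```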

One problem with this approach is that it would be a lot of work for TR. They would have to analyze every citable item to decide which ones to remove. Also, each item removed would reflect adversely on its authors. So there is potential for wrongful damage, plus the potential for disputes is endless. On the other hand, this naming of names might be a strong deterrent. Clearly this is not as simple a procedure as suspension. Still it may be a good idea.

To answer the question, absolutely YES. The ship’s captain is the EIC, and the ship’s crew are the editorial board members. If the ship cannot sail, then the crew and captain are useless, and should be replaced. Quite simple, actually. There is far too much talk of the IF and far too much love for TR. It is time, I believe, to scrap the IF, even as a wave of false or misleading metrics is starting to take shape. The existence of the IF is corrupting science not only because the concept is so simplistically insulting, but because of the way in which it is being gamed and manipulated, by publishers, journals, and scientists, to pseudo-quantify quality. Without the IF, half of the discussion on this page would disappear.

In plant science, I can guarantee, from personal experience, that we are dealing with a crisis of editorial responsibility and arrogance. When I complained about editorial and “ethics” discrepancies being advertised and imposed selectively by the senior editors at Elsevier’s Scientia Horticulturae, and made my claims public, I was made persona non grata. When I indicated that the editorial management of the International Society for Horticultural Science is corrupted because they are simply leaving blatantly duplicated papers in the literature of Acta Horticulturae, my claims and proofs, even to as many as 6000 horticultural scientists, were met with dead silence. When, fortunately, I complained publicly about the fraud at a Serbian journal, the entire editorial board was scrapped, and the journal is now trying to pick up the pieces. Just two days ago, I detected the serious and fraudulent use of the IF and of the names of at least 6 leading agronomists on the editorial board of a “predatory” OA publisher, JAA. After complaints to all editorial board members as well as some international authorities, 11 out of 16 editorial board members disappeared.

So the publishing sea is now full of pirates, sharks, and poorly navigated vessels, all vying for the same territorial waters. And it is the little fish (aka the scientists) who are taking the brunt of this maritime war.

JATdS: When I read your screed, I am reminded of Emerson, who said:

The louder he talked of his honor, the faster we counted our spoons!

I would suggest that if a journal is delisted, it stays delisted. In the olden days, if a card cheat was discovered at the table, he was shot.

To be involved in these types of activities is to cheat everyone at the table. It subverts the entire system, and for whose benefit? The EIC and the publisher. They should suffer the consequences of their actions.

Further, the journal’s former place should be kept in the JCR, with a notice following the journal’s title stating that it was delisted for X.

Lastly, these types of activities should be looked for during the integrity check of the MS by production.

Harvey, the concern is that this would penalize a lot of innocent authors (the family of the cheat, as it were). How to punish the journal without punishing its authors is the interesting design issue.

I can see your point and it is a valid one.

So how does one penalize the offenders? Say it is the publisher who encouraged such behavior. How does one punish the publisher without punishing the innocent? Say it is the EIC who let things slip by, or just one of the authors of a multi-authored article? It seems the same problem arises.

TR delists the journal if it is discovered that shenanigans have happened. In doing so, it in essence punishes the guilty along with the innocent. But after discovery, I would hope that others would not want to publish in the journal. Thus, subsequent authors would not be punished, because they would not publish in the offending journal.

On the other hand, if TR were to expose the article, and if TR had an agreement with publishers that when shenanigans are discovered the article would be publicly retracted and the journal’s impact factor adjusted to reflect the retraction, perhaps the punishment would fit the crime.

Do keep in mind that IFs weren’t designed for authors; they are intended to help libraries compare journals for quality and make subscription decisions. Cheating the IF system muddles this, and that’s why offenders are removed: the citation report should allow for an honest comparison.

Phil’s proposed solution actually has three separate parts: 1) flagging of journals, 2) flagging of articles (and thus inevitably the authors by name) and 3) calculating Impact Factor without the offending articles.

Part 3), directly combatting gaming with the algorithm itself, should be uncontroversial and protect the value of the product TR is selling, so frankly I don’t understand why it is not already done routinely. Yes, it will be a never-ending arms race, but TR has the means to stay ahead of the gamers.

Parts 1) and 2) are problematic, because they give even further opaque power to this single company. If the criteria and metrics were absolutely transparent and reproducible, then maybe this could be acceptable. But it often feels as though TR is regarded as a force of nature, rather than just another company. Do we want to live at the mercy of TR, however benevolent you may think they are?

But more fundamentally, it would be better to erase the collateral damage problem, rather than just try to work around it:

Why should the Impact Factor of a journal have any bearing whatsoever on the authors of individual papers? When we can easily monitor the number of citations a given paper receives itself, why should the value of this paper (and by extension the prestige of its authors) be affected at all if the average number of citations received by *other* papers in the journal rises or falls?

The root of the collateral damage problem is the wrongful attribution of prestige to individual authors based on the Impact Factor of the journal, regardless of the true impact of their particular work.

The bearing of the IF on authors stems from the presumption that journals with higher IFs are harder to get into so being published by one is a certification of value for the article. The analog is getting into a top school.

Or buying expensive designer-label clothes. In both cases, you’re not getting an objectively better product, you’re just showing off your ability to get the desirable brand. It’s peacock feathers.

Not sure that’s accurate. I tend to find that more expensive clothing is often better made than really really cheap sweatshop stuff, is better constructed, better cut and lasts for decades rather than weeks. A $200 pair of boots will last a lot longer than a $10 pair of boots.

Sure, in clothing there is some correlation — but the $200 boots won’t last you 20 times as long as the $10 pair. Probably more like three times as long. So clothing quality perhaps goes roughly with the cube root of the cost, at least in the middle of the curve. (Right at the bottom my guess is that it’s closer to the square root, and up at the top end it’s essentially flat.)

Bottom line, when someone buys a Prada-brand handbag for $700, they’re not buying the handbag, they’re buying the brand.

Really in my experience the $200 boots last hundreds of times longer than the $10 pair. I have boots that are decades old, that have had soles replaced and keep on going, as well as cheap pairs that fall apart midway through a season.

Buying something solely for the designer label, however, is a different matter, but often when one spends more one does indeed get more.

True, but do the feet in $200 boots automatically belong to a more admirable person than the feet in $10 boots? Even if we measure admirability in the same currency as we measure the boots?

True, but there are reasons for paying for quality beyond just the admiration of observers.

I agree. It is absolutely fine, and sensible, for an author to choose where to submit based on the Impact Factor of the journal.

But given modern information technology capabilities and the quality, scope and resolution of data gathered in products like Web of Science or Scopus, does it not seem idiotic for an employer or funder to choose a nominee based on the *other* papers contained in the same vessel? When they could just as easily base the comparison on direct metrics?

I don’t understand why it’s fine and sensible for an author to choose which journals to use based on IF, but not for administrators to evaluate based on the same criterion. Can you please explain the difference?

Sure Mike. Here’s the logic:

Authors compare journals. Libraries compare journals. Using IF for this task is fine.

Administrator is comparing people. Using IF for this task is not fine.

Get it? The difference between comparing journals, versus comparing people?

A journal with a higher IF probably is genuinely a better journal, and reaches more people. It is OK to choose that journal from a set of alternatives, whether you are looking for a place for your paper, or choosing where to spend a limited library budget, or which TOC to browse through to find stuff to read.

In contrast, a person with a paper in a journal where *other* papers in two preceding years have been widely cited is not genuinely more deserving than a person with a paper elsewhere.

“Authors compare journals. Libraries compare journals. Using IF for this task is fine.
Administrator is comparing people. Using IF for this task is not fine.”

Thanks, yes — makes perfect sense.

(I think IF is a terrible way to assess journals, too, but at least it’s a terrible way to assess the right thing.)

Janne, this is what is called an argument by assertion. Why is getting published in a more selective journal not a mark of distinction?

Janne, I think your comment reveals the underlying fallacy in much of the debate on this issue. I do not think anyone is basing funding or employment decisions based solely on the IFs of their candidate’s publication journals. That would be idiotic. But it is entirely reasonable to seriously consider these IFs. Thus people are railing against a practice which probably does not in fact exist. Political movements often do this. It is called demonizing.

I don’t know who has claimed that decisions are based solely on journal names in publication lists. Perhaps a straw man I failed to notice? He probably does not mind being demonized as a demonizer, but I might take a slight offence.

It seems you contradict yourself when you first state that the practice of considering IF to compare authors is entirely reasonable, then say that the practice probably does not exist. I disagree with both of your statements. On what I believe to be rational arguments, not advocacy.

As I say above, I agree with you that as a proxy for rejection rates there is at least some rationale to using IF of journals to compare authors, as a correlate of how many lesser papers the one being measured displaced, but even then it’s silly when a direct measure could easily be obtained.

The practice that I think does not exist is evaluating researchers solely on the basis of the IF of their journals. Here is what I said: “I do not think anyone is basing funding or employment decisions based solely on the IFs of their candidate’s publication journals.”

As for your “easily obtained,” I do not think that getting all the world’s publishers and journals to accurately measure and report their rejection rates is easy. It may be impossible.

Mike:

I would have to disagree. To get designer clothes all you need is money or access to it. To get published in a top-tier journal you need to pass muster and contribute. You can be rich or poor and no one cares; it is the quality of the work that counts.

David, I think you mention the only defensible point in favor of using IF to judge an individual paper or scientist: the assumption that high-IF journals are harder to get into, and thus these papers have already prevailed over a larger mass of lesser, rejected papers. Ergo, they are expected to prove to be more important contributions than those papers which have displaced fewer rivals.

But if we accept that this exclusion measure is a good metric (it might actually be!), why not measure it directly?

Every journal could easily publish its rejection rate, and a journal’s proportion of the global research output can be reasonably estimated from bibliographic databases. The higher these two numbers are, the harder it is to get in, i.e. the higher the number of papers that were inferior (in the editors’ judgement) to the ones that made it through the filter, right? (One crude way of combining these two numbers is sketched after this comment.)

When we could easily have direct measures of both the a priori, and then later the realized, value of an individual paper, why insist on continuing to measure it with IF?
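
Purely as an illustration of the rejection-rate idea floated above: the way the two numbers are combined here (a simple product) is my own assumption, not something the comment specifies, and both inputs are hypothetical.

```python
# Speculative sketch: combine a self-reported rejection rate with an
# estimated share of the field's output into a crude selectivity score.
# Both inputs, and the product used to combine them, are assumptions.

def selectivity_index(rejection_rate, share_of_field_output):
    """Higher when a journal rejects more papers and still publishes a
    larger slice of its field's output."""
    return rejection_rate * share_of_field_output

print(selectivity_index(rejection_rate=0.85, share_of_field_output=0.04))  # 0.034
```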

Janne, I do not agree that “Every journal could easily publish its rejection rate….” You are talking about the entire industry adopting a new accounting system, with all that entails. You might try developing this altmetric, but until someone does the IF is the best proxy we have for this practice, because citations are public data.

Janne: As a young guy I learned to never present a problem without a solution. The importance or lack thereof regarding the IF has been “beaten to death” on these pages. However, no one has presented an alternative that better reflects the prestige of a journal and the concomitant favorable reflection upon authorship. So your solution is?

Harvey, perhaps my comment was TL;DR, but I thought I did present a solution in the last two paragraphs?

If you need it worded from a more positive angle, perhaps this word-shuffle helps:

“We can easily monitor the number of citations a given paper receives itself. The solution to the collateral damage problem is the rightful attribution of prestige to individual authors based on the true impact of their particular work, regardless of the Impact Factor of the journal.”

Yeah, I am not expecting this to happen soon. Max Planck famously said: “Science advances one funeral at a time.”

For the record, I am not questioning IF itself. It is a good metric when used to measure what it is intended to measure.

Why do you not consider the option of the publisher firing the editor and getting a new one? Isn’t that the most responsible way for a journal publisher to act under these circumstances?

Also, it’s worth remembering that the misuse of the Impact Factor is not done by Thomson Reuters, nor by publishing organisations, nor by libraries.

The sin rests firmly and exclusively on the scientists themselves.

Firmly? Yes. Exclusively? Heck, no! Like I always say when discussing scholarly publishing, there’s plenty of blame to go around. In this case one thinks immediately of university administrators for whom recruitment, promotion and tenure decisions are all about journal brand. (If I remember right, a friend in Germany told me that all publications in journals with an IF of less than 4.0 are simply ignored in his case.)

Sounds like advocacy hearsay to me, Mike. I do not accept that the IF is misused. I have yet to see a clear case of misuse.

“I have yet to see a clear case of misuse.”

I flatly refuse to believe you’ve had your eyes that tightly closed for the last decade. See for example Stephen Curry’s piece Sick of Impact Factors and the many, many useful links therein.

Really, if Impact Factors are not misused, why would their inventor, Eugene Garfield, have “issued repeated warnings that journal impact factors are an inappropriate and misleading measure of individual research, especially if used for tenure and promotion”?

Perhaps I disagree with Gene Garfield, and not for the first time, but your reference is not to Garfield, but rather to Cameron. As for your reference to “many, many links” I do not take what I call the ‘go read all this and get back to me’ gambit when blogging. If there are all these obvious cases of misuse please point one out.

If I remember right, a friend in Germany told me that all publications in journals with an IF of less than 4.0 are simply ignored in his case.

Sounds like advocacy hearsay to me, Mike

Here you go: my friend is Heinrich Mallison, and he is extensively quoted in this Nature blog. The money quote:

“The museum is evaluated every seven years (or more often). At worst, it can be kicked out of the WGL, a total disaster as it would theoretically leave the museum with no funding at all aside from what the State of Berlin would cough up.

One of the most important criteria for the WGL is excellence in research, which is measured via publications, especially ISI listed peer reviewed publications, and among these especially top-quartile journal publications. Top-quartile means that the journals are in the top 25% for their field. How is that measured? via impact factors (IF) and similar proxies.
[…]
There are many journals in my field that are highly regarded, such as Paleontology, Journal of Vertebrate Paleontology, and so on. However, many journals cover a wider range of topics, and are thus considered to belong to less specific fields, where they have many competitors covering disciplines with many more people working and publishing in them. Thus, the IF of these other journals is boosted by non-vertebrate-paleontology to levels that vertebrate paleontology alone can never manage.”

In short, for Heinrich to publish in the top journals of his field is literally useless for funding purposes.

Here again Mike, you are jumping from the fact that the IF is used, which is quite rational, to the claim that it is all that counts, which is certainly not the case here. Your original claim seems overstated. Also, your claim that broad journals somehow get higher IFs is questionable. By the way, if paleontology journals get very few citations perhaps that is a problem with paleontology, not the impact factor. I can see that low impact specialties might dislike the IF.

perhaps that is a problem with paleontology

Surreal feeling of coming to a place that first seemed rational, only to fall deep into Mythago Wood… in addition to a straw man I think I just saw a giant troll too.

Is there a better way to totally enrage a scientist than questioning the relevancy of his entire field of science, right to his face? Epic. Get your popcorn ready.

I do the science of science, and it is an interesting scientific question: what does it mean when a field’s leading journals have a low IF?

I imagine the administrators, mostly deans and department heads, would politely say go away. Look at it this way. The publishers argue, correctly in my view, that the peer review based selection and ranking of articles done by the journal system is one of its greatest values. It follows that where an article gets published is an important measure of that article’s importance.

Plenty of blame and plenty of reasons for admiration. Scholarly publishing is extraordinarily good. It could be better–that’s a given. But it’s not broken. It can be meaningfully enhanced with new products, new services, open access programs, and much more. But anyone who has had a child saved by a recently developed medical intervention knows that the system cannot possibly be entirely broken. The moral challenge is to acknowledge its successes without becoming complacent.

For avoidance of doubt …

… Yes, absolutely! I wouldn’t spend so much of my time criticising scholarly publishing if, at core, it wasn’t such a magnificent endeavour. I’d just write it off, and move on. There’s plenty wrong with it, but that wouldn’t even matter if it wasn’t that so much is right.

Mike, who are the people who do university recruitment, promotion and tenure decisions, or the people who decide grants? Scientists. Committees are formed from the senior faculty of the university for smaller decisions, for bigger decisions an independent external committee is formed from… the senior faculty of other universities. Large funding bodies appoint an international grant committee composed of… yes, established and respected scientists.

Science in most democratic countries is largely governed by scientists themselves, with a supposedly meritocratic system. That is a good thing, and meritocracy is a good ideal to strive towards. But it also means we should have a look in the mirror first if something is wrong.

Janne:

You point out a truism.

It is a strange business. The publisher wants a high impact factor because it attracts the best authorship. The EIC sees the same results – better papers are submitted. Better papers are more widely cited, and the university wants stars on its faculty in order to attract grant monies and superior young faculty. It is a vicious cycle.

We need to remember that the calculation of the IF is itself manipulated by Thomson Reuters. They negotiate with publishers regarding what items to include in the denominator. A classic example is the year Current Biology was bought by Cell Press: the IF went up into double figures on the SAME data. So penalising those who game the IF by other means simply reflects that we are happy to have one law for the rich (where you pay to achieve this end and “rise” in society) and one for the poor, where you go to jail.

No thanks.

Much better to go to the root of the problem, which is that we have given up control of our work to corporations (publishing empires). The only solution is for science to be open, because then you can actually judge the quality of a paper. The cover may look good on the shelf, but that is all.

You have made a very serious allegation and I would like for you to back it up. If not, retract.

I am not commenting on what Ferbiglab said, but …

It’s well documented that PLoS Medicine discussed with Thomson Scientific how their initial impact factor was to be calculated in 2005, and that the possible value of that IF, based on the same data, varied wildly:

During the course of our discussions with Thomson Scientific, PLoS Medicine’s potential impact factor—based on the same articles published in the same year—seesawed between as much as 11 (when only research articles are entered into the denominator) to less than 3 (when almost all article types in the magazine section are included, as Thomson Scientific had initially done—wrongly, we argued, when comparing such article types with comparable ones published by other medical journals).

I suspect there’s a difference though, in being able to have input to make sure you’re getting a fair and accurate ranking, and in the implication above that the “rich” are readily able and welcome to cheat.

I’m not drawing (or even implying) any conclusions here — just tossing data into the pot.

What does seem clear is that the opacity and irreproducibility of Thomson’s magic number is harmful to everyone (except Thomson). If the journals’ IFs were published along with the lists of citations and the considered-citeable articles, we would at least not be arguing in the dark.

For those who can afford it, yes.

I’m not suggesting this is a trivial problem for Thomson to solve, even if they wanted to. Their business model is, after all, based on limited access even to the list of impact factors, let alone the data underlying them.

Really, if we’re to have IFs at all, they should be calculated in a wholly transparent way by a non-profit. As well as the transparency and reproducibility benefits, having the citation graph publicly available would also open up the path for clever people to think of less inadequate metrics.

I have a question: if a journal is suspended from the JCR, does that mean it will not get indexed in Web of Science?

Suppression from JCR does not automatically result in removal from Web of Science.
