Fireworks (Photo credit: Wikipedia)

Recently, it seems some keepers of the zeitgeist are suggesting that publishers should avoid promoting their impact factors. From the Declaration on Research Assessment (DORA) to certain voices in academia, various monitors of journal etiquette believe editors, journals, and publishers that promote their impact factors are in some way participating in a ruse or are doing something illegitimate. Some new journals, notably eLife, have publicly pledged not to promote their impact factors.

Such self-imposed restrictions seem akin to putting your head in the sand — a form of avoiding reality, which for journals includes being measured by things like the impact factor, circulation size, editorial reputation, turnaround times, peer review standards, disclosure rules, and more.

For me, journals wanting to promote their impact factors have no reason to apologize and are actually serving the academic community by making their impact factors known and easily obtained. After all, the impact factor is a journal metric. It is not an author metric or an article metric, but a journal metric. It states the average number of citations received in a given year by the scholarly articles a journal published over the prior two years. So, if a journal has an impact factor of 10, that means that, on average (with some disputes around the edges over what’s included and what’s counted), articles in that journal received 10 citations.
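To make the arithmetic concrete, here is a minimal sketch with invented numbers. It is illustrative only; the real figures come from Thomson Reuters' citation database, and exactly which items count as "citable" is part of those disputes around the edges.

```python
# Toy two-year impact factor calculation with made-up numbers.
# IF for year Y = citations received in Y to items published in Y-1 and Y-2,
# divided by the number of "citable" items published in Y-1 and Y-2.

citations_in_2013_to_2011_articles = 1200
citations_in_2013_to_2012_articles = 1800
citable_items_2011 = 140
citable_items_2012 = 160

impact_factor_2013 = (
    (citations_in_2013_to_2011_articles + citations_in_2013_to_2012_articles)
    / (citable_items_2011 + citable_items_2012)
)
print(impact_factor_2013)  # 10.0 -> on average, 10 citations per article
```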

Just because some in academia can’t stop misusing it doesn’t change the fact that it’s entirely appropriate for a journal to use this measure.

So when Elsevier or the American College of Chest Physicians or Springer or Taylor & Francis or SAGE or Oxford University Press promote their impact factors, they are doing something very appropriate and helpful. And when Google — which has the impact factor calculation at the heart of its PageRank algorithm, thank you very much — promotes journal impact factors as a way of differentiating journals in search listings, they are also adding a helpful and proper signal to their search results.

Interest in the impact factor remains intense among authors. Last year, David Crotty documented how posts on this blog that touch on the impact factor routinely receive inordinate and sustained traffic, signifying persistent interest in the topic. As publishers are essentially providing a service to academia, promoting this useful differentiator is simply part of the service. Authors want to know it, so we make it obvious.

Part of what makes impact factor promotion problematic for some seems to stem from a misunderstanding of what the impact factor connotes — and this misunderstanding fuels other problems I’ll discuss later. At its base, the impact factor is just an average. Like other averages, it’s easy to fall into the trap of thinking that every article received the average number. But a batting average of .250 doesn’t mean that a batter will get a hit once every four at-bats. The batter may go 2-3 games without a hit, and then get on a hot streak. An average temperature of 80ºF on June 23rd doesn’t mean that the temperature will be 80ºF every year on June 23rd. It may be 67ºF one year and 83ºF another, with a bunch of seemingly aberrant temperatures in between. Because impact factors are averages and not medians, they tend to be pulled upward by a few heavily cited papers, while the rest of the articles trail behind.
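Here is a toy illustration of that skew, using invented citation counts for ten articles in a hypothetical journal; the mean (the impact-factor-style number) lands far above what the typical article actually received.

```python
# Invented citation counts for 10 articles: most cited a handful of times,
# two cited heavily. The mean is pulled up by the two big papers.
import statistics

citations = [0, 1, 1, 2, 2, 3, 4, 5, 30, 52]

print(statistics.mean(citations))    # 10.0 -> the average an impact factor reports
print(statistics.median(citations))  # 2.5  -> the experience of the typical article
```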

Because of this, assigning impact to articles or authors is a misapplication of the impact factor. Yet it continues to be used inappropriately in some settings, as a journal metric slides into service as an academic metric.

For editors, reviewers, authors, and publishers, a higher impact factor is nearly always something to celebrate. While there are illegitimate ways to achieve a higher impact factor — self-citation, citation rings, and denominator manipulations — most impact factors increase thanks to the hard work and careful choices of editors, reviewers, and publishers. In some cases, a higher impact factor is achieved after years of dedicated work, new resources, and careful strategic choices. In short, a higher impact factor — one that increases more than the inherent inflation rate — is typically well-earned.

The impact factor for our flagship journal recently increased more than 30%. This increase is attributable to multiple improvements, including greater editorial selectivity and focus, better brand management, new product development, and stronger social media efforts driving awareness of our content. This is a legitimate increase achieved after years of coordinated editorial, publishing, and marketing efforts. It feels like something to celebrate.

Journals with impact factors that rank well within their disciplines are also the most desirable places to publish. They usually have achieved a virtuous cycle of editorial reputation, important submissions, strong review, careful selection, and consistently high standards for publication. Making it in a journal with these features is a positive sign for a researcher.

Where things diverge from rationality is when the impact factor of the journal is then assigned to the researcher in some way — an average is used as a proxy. This cuts both ways. For authors of a highly cited paper, the impact factor of the journal may under-represent the actual citations for the paper. For authors of a more typical paper, the impact factor will overstate the citations.

The majority of DORA is absolutely correct, including the point above, which DORA states as:

Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.

But when DORA calls for publishers to cease promoting their impact factors as the “ideal” way to solve the problems with academia’s misuse of the metric, that seems an overreach. Publishers who buy into this line of thinking aren’t doing anyone any favors, as their tacit acceptance of a conflated use of the metric only muddies the waters further and seems to confirm DORA’s overreach. If you’re running a journal, you should know your impact factor and make others aware of it.

Journal impact factors are useful, but like any tool, they should be used correctly. They represent an average for a journal. They change. They can be trended. There are other measures that can be used to complement or contextualize them. Impact factors are journal-level metrics, not article-level or researcher-level metrics.

So, if you have a good impact factor, promote it. But if you don’t edit or publish a journal, don’t borrow the impact factor and use it to imply something it wasn’t designed to measure.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

32 Thoughts on "Exhibition Prohibition — Why Shouldn’t Publishers Celebrate an Improved Impact Factor?"

And how do you propose stopping people from misusing something the journal touts as a golden ticket? Perhaps journals should relegate mentioning impact factors to sales brochures and other material seen by people merely interested in ‘the journal’ but not the articles therein. Advertising an impact factor to authors is like telling people to hang around LA because the rich and famous are there – perhaps you too will become rich and famous…

Publishers can’t solve this problem, and proposing that journals stop sharing their (to use an LA-appropriate metric) Rotten Tomatoes scores isn’t a solution. Academia needs to solve the problem of misappropriation of a journal metric as an academic metric. In that regard, much of DORA made a great deal of sense.

It’s nothing like that at all.
Rather, it is like saying: “Hang around LA. The rich and famous are here for a reason.”

You are suggesting treating the symptoms rather than curing the disease. Asking journals to hide the information authors are seeking will not prevent them from seeking it.

I’m puzzled by your statement that the Impact Factor is at the heart of the PageRank algorithm. Did you mean the Eigenfactor® (also provided by Thomson Reuters)? PageRank is quite well documented [1] and, while Page and Brin were aware of the IF, PageRank is a fundamentally different metric.

[1] http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf

Eugene Garfield is acknowledged via citation early on in the paper you cite, and the impact factor approach has led to further developments. The impact factor was given a recursive twist in 1976 and treated as an eigenvalue problem, and that line of work eventually fed into PageRank. The Eigenfactor per se emerged in 2008 from a paper by other authors. Basically, a lot of people have circled the citation question with different approaches, but to me, all of them start with Garfield and his initial and continuing insights.
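Concretely, the recursive idea can be sketched in a few lines. This is a toy illustration of eigenvector-style citation weighting with invented numbers, not Garfield's impact factor, the 1976 formulation, or Google's production algorithm.

```python
# Citations from highly weighted journals count for more, and the weights
# are the leading eigenvector of the column-normalized citation matrix,
# found here by power iteration. Numbers are invented for illustration.
import numpy as np

# C[i, j] = citations from journal j to journal i
C = np.array([
    [0, 4, 2],
    [1, 0, 3],
    [1, 2, 0],
], dtype=float)

P = C / C.sum(axis=0)   # normalize columns: where each journal's citations go
w = np.ones(3) / 3      # start with equal weights

for _ in range(100):    # power iteration converges to the leading eigenvector
    w = P @ w
    w = w / w.sum()

print(np.round(w, 3))   # recursive "influence" weights for the three journals
```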

Google’s basic ranking algorithm is based on the same principle as the Impact Factor: citation equals value.

I agree that impact factors are useful journal metrics, but I think they have article-level implications as well. Here is the reasoning. If a high impact factor is a matter of careful selection, it would seem to follow that being selected is an important fact about the article, a judgment of merit. This is especially true if a higher impact factor attracts more submissions. In this sense it seems reasonable to relate impact factors to articles (and authors). The concept is that only the best get in. Being published in a high impact factor journal is perhaps analogous to a letter from a leading researcher saying one’s work is important.

David, I agree with your assessment. However, I do not agree with your concluding sentence. I would have stated the conclusion as: Being published … a letter from a leading researcher, saying others think my work is important.

I agree, David, but I think the argument is that rather than taking someone’s word that an individual’s work is important (journal impact factor), why don’t you take a look yourself and see whether the work actually is important (article metrics)? But I’ll admit even that has limitations in discriminating between articles, because of the availability (or lack) of open access; open access articles will certainly be more visible. In addition, articles in higher impact factor journals may be cited more simply because of the increased traffic those journals receive.

“I agree, David, but I think the argument is that rather than taking someone’s word that an individual’s work is important…”

This is the ideal way to deal with the literature, but only if one has 1) unlimited time and 2) complete knowledge of all fields. The problem is that there are more papers out there than anyone can keep up with, and sometimes you need to read about research done in areas where you don’t have a lot of background or expertise. We need filtering mechanisms to prioritize our reading and signifiers to help us find better resources in areas with which we are unfamiliar.

While I wouldn’t necessarily use the impact factor for either of these purposes, it does play a role in what I would use, which is journal brand and reputation. If I read a paper in Journal X, I have a sense of that journal’s reputation and how rigorous its review process is. If the journal has accepted a paper, I can be sure that several hand-picked experts in the field have verified that the paper is of good enough quality to be published at the level of Journal X. What that level is, and how much I trust the review process, is all tied up in the journal’s brand and reputation, which does include the journal’s impact factor.

It can’t tell you definitively that an article is good or important, but it can help you filter the literature and, hopefully, find some of the better stuff.

My journal’s rising IF reflects, among MANY other things, my good editorial judgment. So, I guess, an acceptance letter from me is a kind of endorsement. But I see the details of our citations, and I know from the “zero cites” how imperfect my skills are! There is still no substitute for promotion committees actually reading the candidate’s articles.

Promotion committees are normally not drawn from the applicant’s department, so they are in no position to read and judge the applicant’s articles. The department chairman will surely have input, and that will no doubt count much more than the impact factors of the publishing journals. So I think this issue is wildly overblown. It seems to be part of the general social movement that claims that science as a whole needs major reform, which I doubt very much. David Crotty’s recent list of supposed “crises” is a good take on this. It has all the earmarks of a fad.

Impact Factors are an important part of the scholarly publishing landscape. The calculation’s flaws are well documented, but a fact of life is that many researchers’ careers (and finances) depend on their ability to get published in an “impact factor journal.” In China, for example, many researchers are personally financially rewarded for getting published in a journal with an IF. So don’t blame publishers for promoting their IF, nor for wanting to increase their journal’s IF; it is the single most important metric that researchers use when deciding where to submit their work. Publishers invest, significantly, in complementary (I don’t consider them “alt-”) metrics and in improving our services to authors, but Impact Factors are here to stay and we should embrace them as the most important part of an author’s submission decision.

The impact factor is important in the tenure and promotion process for medical school faculty, just as it is in the hiring of new faculty. The marketing arms of many publishers fill their sites with the most recent impact factor results. My inbox was filled with emails announcing the most recent impact factor results. While it is highly debated, the impact factor is very much alive and well.
Authors still want to be published in journals with high impact factors. Aggregators often pay more for high impact factor journals.

Completely ignoring the IF as a journal management metric seems short-sighted. DORA encourages that the emphasis be taken off the IF and that other metrics be considered along with it:

“…Greatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor or by presenting the metric in the context of a variety of journal-based metrics (e.g., 5-year impact factor, EigenFactor, SCImago, h-index, editorial and publication times, etc.) that provide a richer view of journal performance.”

That seems a reasonable compromise and a useful approach for journal managers.

In medicine we have the rule: do no harm.

In life we have the rule: if it ain’t broke, don’t fix it.

It seems to me that if people perceive, rightly or wrongly, that “politics” is involved, the most immediate response is to shoot first and aim later. That is something I learned never to do in basic training.

Impact factors are very widely used, albeit often indirectly, in the assessment of researchers and in selection committees for positions and grants. I have been on many, many selection committees, and even if it is very rare that someone mentions “impact numbers” directly, or quotes any number, articles published in a “prestigious” journal will make a difference. When we have to choose one candidate out of 20, reading all the articles is out of the question. But if we see in the publication list that there are some papers published in (for example) Nature, that will count as a merit, and we rarely check how many times the specific articles have been cited. Never EVER have I heard anyone in a selection committee mention ANY article or journal metric other than number of citations and impact factor. For individual researcher metrics, the h-index, which is essentially just a derivative of the citation statistics, is also used.

Thus, even if the Impact Factor OUGHT NOT to be used as a metric for individual articles, I really do not see how that could be avoided. From my point of view (active researcher, frequently on selection and evaluation committees), it does not make any difference whether a journal publishes its IF or not. We will still know which journals are the “good” ones.

Indeed, Anders, in other discussions here publishers argue that one of the values added by peer review and selectivity is the ranking of articles. It may even be the greatest value of the journal system. Thus it makes sense for selection and promotion committees to make use of journal rankings in judging researchers.

In fact, I am now curious as to why the idea that the IF is being misused is so widely accepted. It has been frequently stated here in the Kitchen as though it were an established fact. What is the evidence for this? Is it perhaps just a myth propagated by would-be reformers? It has been a long time since I was on a university faculty, but I have a hard time imagining the reasoning of a promotion committee being dominated by the impact factor.

Impact factors of scientific journals are the equivalent of circulation for newspapers. The Daily Mirror may sell one million copies and be proud of it, but that does not mean it is better than the Times, the Telegraph, or the Guardian.

This is a false comparison. I agree that just looking at the number of subscribers to the Daily Mirror doesn’t tell you anything (just as looking at the number of citations to a single journal would be uninformative), but if you look at the number of subscribers to the Mirror vs. the Times/Telegraph/Guardian/etc., then you can get a qualitative ranking* of the newspapers by determining who is willing to pay for them (and thus judges the content to be worth their money). *Of course, what people are willing to pay for may vary (some people want international news, some want sports, etc.), so this analysis will have flaws (just as the IF doesn’t determine whether people are citing a paper as a good or bad example of previous work in the field), but you can get information out of it.

No. Comparing Impact Factors across disciplines is yet another misuse of the metric.

CA: A Cancer Journal for Clinicians also dwarfs Cancer Cell. And in the same discipline you have papers for intelligent people and fashionable papers, which will be cited. The evidence of misuse specifically across disciplines should be considered as evidence of generalized misuse.

Misuse is misuse.

“CA: A Cancer Journal for Clinicians also dwarfs Cancer Cell”

Are you suggesting that clinical research and treatment of patients is somehow less important than cell biology?

“And in the same discipline you have papers for intelligent people and fashionable papers, which will be cited”

If you are a researcher and you are citing papers because they are fashionable, then the problem lies with you, not the publisher.

What you seem to ignore is that there are no scientific discoveries reported in CA: A Cancer Journal for Clinicians, and that it is freely distributed to clinicians.
The problem may not lie with the publisher, but if the Daily Mirror advertises itself based on its number of readers, I perfectly understand that better newspapers do not.

It’s not clear what you’re trying to argue here. Are you suggesting that basic research is more important than translational research? That bench science is more valuable to society than clinical science?

I am not too sure Mr. Corcos understands what researchers do or why they cite various papers. For instance, if a researcher finds a paper in a journal that has no IF but the paper is relevant to his/her research, s/he will cite the paper.

I have cited almost a thousand papers, but, if you say so, they were probably not relevant to my research.

Did you cite them because they were fashionable, as you claim above is the common practice among researchers?

If I were working in a fashionable field, I would. Papers are cited because the field is fashionable.

Now you’re contradicting yourself and veering off into the absurd.

As the moderator on this site, I think this conversation has reached the end of its value.

Comments are closed.