Editors and publishers often feel ashamed when they find themselves beholden to rough measures like the Impact Factor. In the cool-weather months of the year, they can even become defiant — gathered in enclaves and protected from winter’s chill, they gird their courage with drink and boasts of indifference to the measure. But, as winter thaws into summer and a fresh set of numbers blooms, these defiant feelings melt like poolside popsicles, with many of these same rebels turning into puddles of fawning, adoring Impact Factor fans.

I can’t count how many times I’ve heard editors and publishers proclaim that they will no longer be held rapt by silly metrics like the Impact Factor, Eigenfactor, h-index, or immediacy index — only to see them minutes or hours later unselfconsciously strategizing how to improve their Impact Factors by attracting better papers, working more closely with authors, attending more meetings, or eliminating editorial features that weigh down the overall calculation.

[Image: Beach Blanket Bingo, via Wikipedia]

This occurred once again this year, but in duplicate, as the Impact Factors for 2013 were delayed by over a month, allowing everyone time to get keyed up, reset, and then get keyed up again.

Regret comes later, when temperatures fall, to be followed again by renewed feelings of defiance.

Robust online analytics only exacerbate these feelings, as now editors and publishers possess new metrics and measurements through which they can view editorial success or failure. In fields that naturally reward deconstruction and pushing to the elemental, the traditional rough approximations are somewhat counter-culture, and strike some as inadequate. Detailed online metrics reinforce this perception.

In contrast to new mounds of specific data, the Impact Factor can seem crude — it’s not nearly as detailed, its time-lag is significant, and it’s unclear how applicable it is to any particular author. Attempts to address these apparent inadequacies have produced a number of altmetrics, which oddly always seem to point back to some proxy for the Impact Factor. This has become a source of some shame in the altmetrics community. Article-level metrics are another attempt to improve the situation.

But should anybody feel ashamed about relying on rough measures of value and prestige when it comes to scientific and scholarly journals? Perhaps these aggregate metrics are the best we’ll ever have or ever need.

Journals are inherently unpredictable, which may make rough approximations exactly the measures that matter. In this light, the Impact Factor is an approximate metric of value and importance, just as brand and editorial board composition can be.

These approximations may directly reflect the fact that journals are abstract approximations themselves.

There are many things to approximate when setting up a journal — frequency, scope, aim, editorial tone. And these approximations can change as publishers, editors, and audiences shift around. However good the approximation is at the outset, each journal then becomes subject to other approximations — those of authors. They will decide where the journal is in their submission pecking order, whether its aims and scope fit their research output, and so forth. Those who see a reason to submit will do so. Those who do not, will not.

This creates the unpredictable flow of manuscripts, a feature of journal publishing that never goes away and never becomes very controllable. Editors can try to influence authors, appeal to them at meetings, make personal contact, or call in favors. But these techniques only go so far. Ultimately, authors provide a second-layer approximation of what a journal will be. They vote with their submissions.

Because journals rely on an uncertain flow of submissions — both the volume and the type of manuscripts are relatively unpredictable — journal issues aren’t comparable. I can recall years in which it seemed that all the best papers were negative trials. They were published, but they changed the character of the journal for that year. In other years, all the best papers came from one specific sub-discipline, reflecting either blind chance or funding decisions made years earlier. The fact is that any issue of any journal is unique, consisting of papers the editors cannot recruit again. Blending topics can conceal this unpredictability to some extent, but not entirely.

In essence, every issue of a journal is a rough approximation of a journal concept.

The word “granular” has some relevance here, as we often say, in our modern trendy talk, that we want more granular data, more granular measures, and more granular insights. But it’s easy to lose the forest for the trees — or, to stick with the “granular” theme, it’s easy to lose the beach for the grains of sand. Is that a nice sunny spot above the rising tide? Let’s toss the blanket there. Do I need to understand that there is a higher concentration of calcium in this section of the beach? Do I need to measure the average micron size of the grains here compared to 30 feet down the beach? Or would that be unreasonable?

With online data, we can now measure article performance with a much higher degree of granularity. But once measured, where does that leave the editor and publisher? Usually, nowhere.

Let’s assume that Article A has an incredible number of downloads and social media interactions. Article A is on Topic A. Logically, I’d want Article B on Topic A to follow up on this success. There are three immediate concerns. First, Topic A is only one of 15 topics this journal needs to cover. Second, there is no good Article B on Topic A in the hopper. Third, by the time Article B on Topic A arrives, is reviewed, edited, and published, Topic A might have become much less interesting, for a variety of reasons.

But what about authors? Shouldn’t they have more granular data? This is where incentives come in. As noted above, most Impact Factors are heavily skewed by a few highly cited papers. For those authors, more granular data would be beneficial. But for the rest of the authors in any batch of Impact Factor data, more granular data would reveal that their citation rate is below — and sometimes far below — the rate the Impact Factor implies. As for downloads and views, again, there is risk. I’ve had authors react in many different ways depending on their expectations and the actual data. Some are pleased. Others are disappointed. In some cases, articles have received zero views. Zero. In those cases, authors are sorry they asked.
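To make this skew concrete, here is a minimal sketch in Python, using entirely hypothetical citation counts, of how a journal-level Impact Factor (essentially a mean of citations to the previous two years’ citable items) can sit well above what the typical article in the same journal earns:

```python
# A minimal sketch with made-up numbers: the Impact Factor is roughly a mean
# (citations this year to items published in the prior two years, divided by
# the number of citable items), so a few heavily cited papers pull it well
# above what most individual articles receive.
from statistics import mean, median

# Hypothetical citation counts for 20 citable items from the two-year window:
# a couple of blockbusters and a long tail of lightly cited papers.
citations = [98, 45, 22, 9, 6, 5, 4, 3, 3, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0]

impact_factor = mean(citations)      # the journal-level figure that gets quoted
typical_article = median(citations)  # what the middle-of-the-pack article earns

print(f"Impact Factor (mean):   {impact_factor:.2f}")   # 10.15
print(f"Median article (count): {typical_article:.1f}") # 2.0
```

With these invented numbers, the mean lands above 10 while the median article earns 2 citations, which is exactly the gap that more granular data would expose to most authors.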

Even with these precise data, where does an author stand? Can there be any predictability based on one paper in a single journal? Some authors who have seen low usage for one paper have had outstanding citation and traffic results for others. Not all papers are created equal, and not all topics resonate. Granularity presumes predictability — after all, collecting detailed data suggests you want to manage something next time. But what is there to manage?

We often say we “measure to manage.” But journals aren’t manageable in this way. Making the scientific and scholarly journals marketplace precise and predictable would mean imposing severe constraints on authors and editors. Getting granular data and using it meaningfully would trap journals in a mold that is both accidental and limiting. This is the major problem with analytics in scholarly publishing — there’s no recreating any issue or article we ever publish. They are all one-offs. Each is unique.

This ultimately gets to the value of journals as news sources in their respective fields. Echo chambers are easier to manage in a data-driven publishing model, but echo chambers aren’t news sources. News is messy. News is unpredictable. News is, well . . . news. And for this reason, measuring journal articles at a granular level is likely beyond the point of diminishing returns for anything other than directional editorial purposes — commissioning a review article, writing an editorial, and so forth. It also courts many risks that rough measures avoid — for authors, for editors, and for publishers.

Getting more granular than brand, editorial board, or Impact Factor is a fool’s errand, I believe, because the inflow of manuscripts is inherently unpredictable, as is the pool of authors and the range of topics. No editor or publisher can accurately promise a level of citation for any particular paper. This is evident in the skewness of Impact Factors, which are typically driven by a few high-citation papers and a longer tail of less-cited works. Predicting or promising which papers will end up in the former category is a fraught endeavor at best.

So, while bowing before the approximations of the journals landscape may seem irrational to scientists, it is ultimately a necessary state in a free intellectual marketplace full of unpredictability. We are all approximating. There is no path to useful precision when predictability is not possible and not recommended.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

9 Thoughts on "A Day at the Beach — How the Messiness and Unpredictability of Journals Thwart Granularity"

It would be interesting to see a complementary column on how articles and journals can manipulate the data to increase the “impact factor.” This might prove interesting given the increasing number of semantic search engines that can read and weigh identified text, or, as some have noted, the fact that authors now slice their research into finer segments to increase the number of articles per data point, so to speak.

One might wonder, as editors struggle to find quality articles, why there is still an increasing proclivity for publishers to launch more journals rather than expand extant journals. In the end, with the rise of semantic search engines, the “journal” is just a gateway for an article’s entry into an infinitely expandable database. Perhaps a publisher should measure the impact factor of its collective oeuvre?

This may prove more interesting as, with the exception of very focused STM publications, much research is crossing traditional disciplinary lines including overlap between STM and HSS.

Thus, it seems that these evolving factors, such as shifts in how articles are structured in response to “impact factors,” need more thoughtful analysis.

Two corrections. First, at least one study has found that “salami slicing” of studies doesn’t occur at the rate many believe. I’ll see if I can find the citation today. As I recall, this study showed that “least publishable unit” approaches may in fact be decreasing. Second, existing journals have increased their rate of publication generally.

Can you provide an example of what you mean by “overlap between STM and HSS”?

Before examining these factors, we need to know which way things are actually trending.

“… only to see them minutes or hours later unselfconsciously strategizing how to improve their Impact Factors by attracting better papers, working more closely with authors, attending more meetings, or eliminating editorial features that weigh down the overall calculation.”

Well, hang on. There are perfectly good reasons for editors to want to attract better papers or work more closely with authors. Those behaviours needn’t have anything to do with Impact Factor.

Eliminating editorial features that weigh down the overall calculation, on the other hand …

I’ve been editor of my journal for nine years, and examine cites at the article level every year. So, one might think I’ve gained a few insights as to what works. The humbling fact is, my ability to pick highly cited articles, as evidenced by my selections for issue highlights, is little better than a monkey’s. (Perhaps those photographic macaques of a recent thread can help me!)

One thing that does work is organizing featured collections of articles on specific topics: these articles are cited about twice as often as single articles. Collections cover a topic of likely interest and typically invite better known authors. However, I have no clue whether these factors alone completely explain the higher cites.

The bottom line is, raising Impact Factor is a steady slog of keeping in contact with authors and providing author service. There’s no substitute for hard work.

Yes! This is what I tell our editorial boards. Do a better job of managing turn-around time, solicit special issues that are really good, promote the journal to your circles. If you do the hard work, and you have support from either the publisher or society, the better papers will come your way. It is incredibly hard work.

I could not agree more, as I have had the exact same experience as editor of a range of different journals: editors are no good at predicting how cited/downloaded/tweeted a paper is going to be.

This really speaks to the eternal struggle between big picture data and granular data. You may not want to know the calcium content of the sand you are sitting on, but you probably want to know if that sunny spot of sand is going to be underwater at high tide or downwind of a paper processing plant.

As we talk with journals and authors, we’re discovering that what stands out to authors is a journal’s discoverability (“How likely is anyone to see my work if I publish in this journal?”), impact (“How likely is my work to be cited if I publish in this journal?”), and reputation (“Will my peers respect me for publishing in this journal?”). Looking at these three components of a journal provides more detail than just looking at the Impact Factor, but it doesn’t require doing enough research to write an entire second paper on the distribution of article metrics when deciding where to publish. It also provides a better window through which editors and publishers can monitor prevailing opinions about their journal.

For the reasons valiantly outlined by Kent above, the Impact Factor is a good (and perhaps the best) measure for comparing the scientific value of journals, or comparing one journal’s performance in different years.

For the reasons valiantly outlined by Kent above, the Impact Factor of a journal is nearly useless for comparing the scientific value of individual articles, or for comparing scientists.

The “defiance” arises from the sad fact that too many people practice appalling misuse of Impact Factor and give grants and jobs to applicants who have more papers in higher scoring journals (regardless of the scientific value of the paper, even if it has zero citations in two years and thus in fact *lowers* the Impact Factor of the journal).

Kent seems to be muddling these two very different usages of Impact Factor, and considering he certainly knows this is the very difference that causes the “defiance,” one might be forgiven for suspecting he does the muddling on purpose, because the first use case is easily defended, while the second is utterly indefensible.

But of course Kent knows all this, very well. Surely the readers of Scholarly Kitchen know all this as well, so it is a bit baffling why he chooses to cook with red herrings.

This is not true. The consistency of IF over the years demonstrates that editors do indeed pick highly cited articles *on average.* That’s the key. It’s an average for the articles in a journal. Surely the scientific community does not have to have statistics explained to it by an English major!
