Image: “Drowning…” by Charl22 ~ Charlotte Faye Addison, via Flickr.

Scholarly publishing’s reputation is that it uses peer-review and editorial judgment to separate the wheat from the chaff. This is why “getting published” is such a big deal. The reputation authors garner by being published in a scholarly journal is that they have passed through the tight filter on scholarly communications, where only the best of the best gets published.

But that reputation is no longer deserved. Scholarly publishing, under pressure to conform to a “publish or perish” academic culture, an undifferentiated (except in quantity) purchasing universe, and other incentives for more instead of better, is failing.

What we’re dealing with now is not the problem of information overload, because we’re always dealing (and always have been dealing) with information overload. . . . Thinking about information overload isn’t accurately describing the problem; thinking about filter failure is. – Clay Shirky

In many fields, most papers get published in some journal. For the New England Journal of Medicine, a recent analysis showed that 90% of submissions are ultimately published somewhere else. Eventual-publication rates tracked from other journals run between 47% and 75%. So, in aggregate — at the system level, not the journal level — the rate of non-publication across all papers is somewhere between 10% and 53%, with most studies putting it between 10% and 45%.

Most papers get published. In fact, it’s more likely that your paper will get published than not — if you’re persistent and willing to submit to multiple journals.
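To make that concrete, here is a minimal back-of-the-envelope sketch in Python of how persistence compounds; the per-journal acceptance rates are hypothetical, not taken from the studies cited above, and journal decisions are assumed to be independent:

# Chance of eventual publication after n submissions, assuming independent decisions
# and a constant per-journal acceptance rate p (both assumptions, for illustration only).
def eventual_publication(p, n):
    return 1 - (1 - p) ** n

for p in (0.2, 0.3, 0.5):          # hypothetical acceptance rates
    for n in (1, 3, 5):            # number of journals tried
        print(f"acceptance {p:.0%}, {n} submissions -> published {eventual_publication(p, n):.0%}")
# Even at a 20% acceptance rate, five persistent submissions give roughly a 67% chance
# of landing somewhere; at 30%, the chance is about 83%.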

Since the majority of papers get published, being published isn’t such a big deal anymore.

Instead, where you get published is the big deal. The journal that publishes you is the signal of quality, right?

Really?

There’s another mechanism that seems broken — the vaunted impact factor. Not only is its algorithm far too simple for a networked world, but a recent example shows the flaw of averages in a real way.
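For reference, the standard two-year calculation is nothing more than a ratio of totals:

IF_Y = C_Y(Y−1, Y−2) / N(Y−1, Y−2)

where C_Y(Y−1, Y−2) is the number of citations received in year Y by items the journal published in the two prior years, and N(Y−1, Y−2) is the number of citable items it published in those years. Everything about how those citations are distributed across individual articles disappears in the averaging, which is the flaw the example below makes plain.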

Just last month, PLoS ONE’s impact factor came in well above expectations, at 4.3, with only a slight amount of self-citation. PLoS ONE has an acceptance rate of 69% — roughly in line with the aggregate acceptance rate of the journal system as a whole, but quite high for a single journal. And PLoS ONE publishes a lot of papers, meaning hundreds of authors submitted to a journal that gave them a 7 in 10 chance of being published. Now, each can claim to have been published in a journal with an impact factor of 4.3.

Yet in a recent Chronicle of Higher Education article, Bauerlein and collaborators write about how only a minority of articles published are cited within 5 years of publication:

Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.

So, impact factor may actually be reflective of a minority of the published literature, yet every author gets to claim the aggregate, average impact factor for the journal in which they were published.

Did I mention that this system seems to be broken?

When a pooled resource’s impact factor is higher than those of dozens of more selective specialty and niche journals that carefully filter their material for specific audiences and have much lower acceptance rates, maybe scholarly publishing is about quantity more than quality.

The more cups of water you pour forth (papers you publish), the better your ability to wick up the impact factor?

It may be that PLoS ONE is ahead of its time, pooling papers in biology and related fields rather than forcing authors to ship them off to other journals. By doing so, it gets a diluted impact factor (about 1/4 that of the main PLoS journals), but even at that level of dilution, it makes waves through sheer volume, even if only a minority of its papers are cited. An average for a field will beat a large percentage of the journals in that field, especially if there’s skew in the distribution — a skew that occurs naturally in citation data and can be driven further by active blogging and other promotional means.
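A minimal sketch of that flaw of averages, in Python, using an invented, heavily skewed citation distribution (these numbers are illustrative only, not PLoS ONE’s actual data):

# 1,000 hypothetical papers in a pooled megajournal: most draw little or no attention,
# a handful are cited heavily (an invented distribution, for illustration only).
citations = [0] * 500 + [1] * 250 + [2] * 150 + [10] * 80 + [100] * 20

mean = sum(citations) / len(citations)           # the impact-factor-style average
median = sorted(citations)[len(citations) // 2]  # what the typical paper actually receives

print(f"mean citations per paper:   {mean:.2f}")   # 3.35
print(f"median citations per paper: {median}")     # 1
# Every author gets to claim the 3.35 average, even though half of these papers were never cited at all.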

Also, authors publish by the demitasse, dividing studies into 2-3 papers and submitting them to different journals. In an interesting discussion of the Chronicle article on Reddit, one contributor states:

The problem is when papers are not cited because other papers already say the same thing better.

It’s hard to imagine that the New England Journal of Medicine or Nature would ever publish 70% of the papers they receive. Their impact factors would fall dramatically — probably only as far as the average impact factor for medical journals, but that’s a sea change for a top-tier journal. But for a niche journal? Opening the floodgates and pooling more articles could yield an improvement under current measurement and reputation technologies.

Should we change from a set of struggling specialty journal buckets with mediocre impact factors into a larger pool of information that captures the average? Should we have a swimming pool of papers to make sure that we have a lot of high-scoring articles to drive our impact factor? Or should we carefully boil a few cups of water to create a pure puddle of papers?

From a journal user’s standpoint, the literature is most often viewed as a pooled resource these days — PubMed and Google gather it all together and present it as a list of search results. No longer is there much of a time benefit to be had by searching in branded content silos. That use-case seems to be a brief and fleeting one — a glance at an email table of contents, the cover of a journal as it passes from mailbox across desk toward wastebasket. It’s an anachronism.

Now, one or two searches can generate a swath of results across all sources and provide users with the confidence that they’ve seen most of what exists on a topic.

Because users commonly view the literature as a pooled resource, maintaining separate journal cultures and practices seems a little silly, especially given all the forces routinizing, automating, and normalizing behaviors among journals — from manuscript submission systems to online publishers to consolidated composition vendors to publishing organizations to disclosure standards to funder mandates.

We’re being “pooled” no matter how you slice it — we’re using the same systems, attending the same conferences, modeling the same behaviors, and perpetuating the same beliefs.

The buyers of journals are increasingly pooling their resources, as well — from consortia to package buys to federations. Because pricing is viewed in pools, large pricing differentials become untenable. After all, Brain Research, a journal offered for $15K a decade ago, can be causally linked to the open access movement, despite the fact that a sober economic analysis showed it was reasonably priced on a per-use basis. Because quality differentials can’t drive business growth in site licensing, quantity differentials are used. “Big deal” sales, the continuous rebucketing of content, and cynical new launches are responses to a purchasing environment that rewards quantity over quality.

Is this move to pooling information concealing a dangerous undertow for scholarly publishing? Is it a form of “filter failure” itself? Are we fooling ourselves that pooled resources are superior to more distilled resources?

It’s worth noting that even the advocates who say the majority of studies should be published also say that filters are vital to making such information usable. They just don’t accept that the filters should reside in human judgment placed in the hands of a few, even though that is how filters have tended to resolve historically (and filter customization still counts as human-judgment filtration).

Even Wikipedia evolved from a playground of thousands into a system filtered by a few dozen editors.

The problem with pooling resources is that pools contain undifferentiated liquids (do we need that dye that turns urine blue?). Some of these liquids can be added without immediate harm, or even filtered out later. But some are corrosive, uncomfortable, or simply unwanted.

We are in the age in which publishing is an expectation — not getting published is an exception on any front. But scholarly communication is supposed to be different — more exacting, a higher standard, peer-review and editorial judgment.

Yet the majority of papers are being published, and the growth rate means that we’ll experience a doubling of output in another 20 years, if not sooner.

And one of the deep questions of our age will continue to be how we filter information.

With financial rewards, philosophical pressures, academic incentives, and potentially false equivalencies driving us toward publishing more and more, filtering less and less, are we already in the midst of a “filter failure” of immense proportions?

Are we abdicating our filtration role at the upstream end, in the pursuit of short-term gains, short-sighted philosophies, and trendy group-think?

Are we deserving of a reputation for quantity instead of quality?

Are scholarly publishers, academic leaders, and information purchasers racing toward a pool in which they will drown?

Who will filter our increasingly brackish waters?

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

28 Thoughts on "Cups, Buckets, Pools, and Puddles: When the Flood of Papers Won’t Abate, Which Do You Choose?"

I am drowning in aqueous metaphors. Could you perhaps summarize your thesis without them and the rhetorical questions? Welcome back.

Well, thanks for diving in!

Ultimately, I’m wondering if our reputation for quality is being undone by practices that reward quantity — author practices, publisher practices, purchasing practices, and technology practices.

Kent,
If I understand your argument, your solution to fix the “broken” and “pooled” system is to return to small, niche journals that focus on rigorous filtration? i.e. return to the “cup” model?

I wrote this to point to a problem we’re facing — our reputation rests on filtering, but the filter isn’t effective. How this gets solved is unknown. But right now, STM publishing can be portrayed as a game of volume, not quality. Differentiating the signal from the noise is something I’m not sure we’re currently capable of.

Kent,
You may be focusing too much on PLoS ONE — their Impact Factor in relation to their rejection rate is an outlier and I tried to explain why, and why it may change to become more “normal” in the near future.

As long as authors, funders and libraries are willing to pay for more articles to be published, I’m not concerned that more manuscripts are allowed through the gate. From a system’s point of view, the function of filtering is to stratify these articles in the journal system — concentrating the best articles in a small number of journals — and thus making it easier for readers to navigate this system.

The fact that most articles remain uncited (or just self-cited) convinces me that the filtering system is doing its job.

PLoS ONE is potentially a more honest approach to what large publishers are already doing, which is essentially publishing a high percentage of papers through multiple outlets. It’s worth focusing on because it’s emblematic of the times. Funders, libraries, and authors are paying into the current system ONLY for more articles to be published. As a stratification technique, citation is a poor approach because of confounders like intent and strength of evidence (a citation may be negative, or may point to weak evidence). Prominence isn’t evidence. Stratification weakens when access is pooled. Reader research I’ve seen shows that beyond a handful of titles, there is very little differentiation between papers owing to journal brand. So, when authors are serving up papers in demitasses (“salami slicing”), then finding publication in disparate places that are pooled for search and diversified for effective sale (pooled purchasing drives this), how are we supposed to say that being published in a journal is a sign of quality? My argument is that being published in a journal is becoming a sign of quantity.

I’m assuming Nature Communications would be an example of what you’re talking about here, Kent, as it seems to be a collecting point for all the papers submitted to Nature and the Nature-branded journals that aren’t quite good enough to be published in those outlets.

That’s certainly far along the spectrum, but I think there’s room to think about the incentives that created Nature Medicine, Nature Genetics, Nature Geoscience, Nature Physics, Nature Photonics, etc. Because the system rewards more journals rather than better journals, we get more journals, which means more outlets where a paper can pick up a journal impact factor (whether the paper itself is cited or not), plus PubMed and NCBI database entries, Google entries, etc., all while its authors can claim publication in a journal.

The core question I wanted to ask is about quality (which we purport) and quantity (which we seem to practice).

A very sobering analysis, and an important one.
Kent notes that journal article authors can eventually get published if they opt to submit their papers to multiple journal outlets. He might advantageously have added that the editorial policies of practically all scholarly journals prohibit “simultaneous” submission (in fact, copyright conveyance is usually required at submission, not upon publication). If this rings true with most readers, then original research or review articles will become more and more dated as successive re-submissions take place, unless the authors constantly update their references and material. The end-state would be not only increasing mounds of research, most of which gets published, but increasing mounds of research in 2nd- and 3rd-tier journals that is already “old hat.”
Does this sound like fairly reasonable logic?

First of all, a majority is not “most.” “Most” is something like over 90%, so by Kent’s numbers it is not true that most articles get published.

Second, it is precisely because different journals have different ranks that Kent’s problem probably does not exist. Tiering is an essential part of the filtering process.

Note too that the scientometrics people have all sorts of sophisticated metrics, beyond the simple impact factor. But the ranking that I am referring to is community reputation, not impact factor.

I personally think everything should be published somewhere, because sometimes the breakthrough stuff is unpublishable in established journals. Maybe we need a journal called the Slush Pile.

The idea that breakthroughs are frequently unpublishable because they run so counter to established dogma, stem from those disenfranchised at the edge rather than the center, and/or remain undiscovered in some (ancient) slush pile is often used in these debates.

But, for the most part, this is just a romantic notion, and the reality is far less exciting. It is very rare, and the competitive nature of journals means that editors are as often accused of fueling the opposite tendency – publishing ‘sexy’ dogma-destroying breakthroughs that do not hold up under subsequent scrutiny.

I concede your point, up to a point. Unpublishable greatness is rare while fad publishing is common.

But I chose the term Slush Pile deliberately. I would like to see everything written available for search, because there is no way to know what is truly worthless to everyone. Put another way, every idea may be useful to someone. Call it a vanity journal if you like and charge a fee. But this really has little to do with the present discussion.

Depending on the field, most papers are being published. Some of these estimates use different techniques to follow the trail (title changes and scrambled authorship and salami-slicing make tracking the papers hard). Overall, the majority (or “most”) are finding publication.

Journals have different ranks, but the trends are for more journals to be published rather than for high-ranking journals to be worth more or to stand out better in search results. Beyond a handful of titles, readers don’t seem to care except to know that it’s in PubMed or another certifying database.

If everything should be published, then why have journals? We should just settle all this now, and say that scientific publishing IS about quantity (trending toward 100%).

Kent/David – I think we are all missing the point that, depending on your publication model, there is a huge difference between most (50-80%) and all papers (100%) being published.

If the trend is indeed to the latter and these are published under an author/funder-pays model, that potentially amounts to a massive increase in the overall cost of publication to the academic community (25%-100%).
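For the arithmetic behind that range, assuming publication cost scales directly with the number of articles published (the author/funder-pays framing above): if 80% of papers currently find publication, publishing everything means 100/80 = 1.25 times as many articles, a 25% increase in cost; if only 50% currently do, it means 100/50 = 2 times as many, a 100% increase.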

When you say “the trends are for more journals to be published rather than for high-ranking journals to be worth more” are you claiming that fewer journals is better? How many is best — one, ten?

I am also curious about your “rather than” as though competition and new journals were a bad thing. I would love to bring out a new journal and knock off a leader.

The journal market is a marketplace of ideas. Surely the number of journals is not something to be specified. If the trend is toward more journals there is probably a good reason for that, one I would rather understand than criticize.

I try to explain why I think it’s happening. I think it’s about incentives. Authors are incentivized to publish as much and as often as they can. Publishers are incentivized to create as many journals as they can. Institutions can expand budgets more easily to buy more journals, but not as easily to pay more for the same journals. I don’t think this is all as noble as “a marketplace of ideas.” I think it’s definitely about a marketplace, though.

I think we’re lapsing into a situation in which journal publishing is becoming more about quantity than quality. There is no specific, a priori number of journals, but the incentives aren’t set up for better journals or papers, just more of them.

One possible outcome – which high-volume, low-bar journals may hasten – is that ultimately a negative feedback loop develops and ‘publish-or-perish’ no longer operates – because the scientific community and, critically, those charged with making appointments decide that ‘publication’ alone is no longer an appropriate measure of an individual’s scientific output.

In this scenario, the ‘where’ (i.e. journal brand) might become even more important; alternatively, a new quantitative measure (e.g. downloads/citations) could take over. Either way, many researchers may end up imposing a pre-filter on themselves, deciding that it is not worth publishing certain results, because they have little to gain from this – so reducing the volume of published information.

For anyone who does not believe this is already widespread, be assured it is – I know many scientists who simply don’t publish work if it is not good enough for a prestige journal and others who only do so if under pressure from a junior author who needs the ‘career points’. Devaluing ‘publication’ has the potential to extend this practice to all scientists.

Many who supported the original E-Biomed idea might welcome such a scenario. As one told me at the time “the problem is the vast number of papers that are not read, not cited and, frankly, no good – we need a place to put this stuff”. That place is currently journals, and it is this that is the cause of the serials crisis, which has the potential to adversely affect science regardless of whether a subscription model (librarians run out of money) or an OA model (author funds run out) operates.

I suspect that dataminers would argue that such self-regulation would be a bad thing and that the data should all be out there to be mined. To this I’d say that we would still have databases (NCBI/GenBank, etc) for such mining and that mining literature is pretty pointless while good labs don’t publish negative results (unlikely to change any time soon).

There are two separate issues here.

The first is the value of peer-review at the system level. I would argue that most papers being published somewhere, eventually, is not a symptom of systemic failure. Scientists are individuals who have received an extremely high level of professional training and are typically conducting well-devised research. Moreover, most science is collaborative, meaning a number of people are involved and are in a position to check the work of others. And, as Richard Sever points out, science is very competitive, so there is an incentive not to publish truly bad work. Beyond all of this, the peer-review process is, in theory, supposed to be an educational one. Peer reviewers provide comments aimed at improving a paper. Whether authors actually use these comments before submitting to another journal is an open question, but theoretically the paper should improve with each submission. Which is all to say, I do not think that a system in which between 10% and 45% of all papers are ultimately rejected is a case of filter failure.

As Phil points out above, journals are the filter for this system. The filter failure is (as you mention above) that a journal is not necessarily a good indicator of article quality. This is true of well-established, highly selective journals, where a small number of papers can account for a disproportionately large share of citations. It is even more true for large journals like PLoS ONE. (Though to be fair, PLoS ONE is hardly the first gigantic journal – the JBC, the Astrophysical Journal, PNAS, and others have published large volumes of research, and have received high impact factors, for some time.)

I agree that we have a filter failure, but think it is at the journal level, not the system level. The solution may very well be some kind of article level metrics – though, of course, these are lagging indicators and do not help with filtering and assessing newly published research.

From The Scientist’s coverage of this year’s Impact Factors:

Specifically, the publication with second highest impact factor in the “science” category is Acta Crystallographica – Section A, knocking none other than the New England Journal of Medicine from the runner-up position. This title’s impact factor rocketed up to 49.926 this year, more than 20-fold higher than last year. A single article published in a 2008 issue of the journal seems to be responsible for the meteoric rise in Acta Crystallographica – Section A’s impact factor. “A short history of SHELX,” by University of Göttingen crystallographer George Sheldrick, which reviewed the development of the computer system SHELX, has been cited more than 6,600 times, according to ISI. This paper includes a sentence that essentially instructs readers to cite the paper they’re reading — “This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination.” (Note: This may be a good way to boost your citations.)

“Without another, similarly important article in 2010, Acta Crystallographica – Section A is likely to return in 2011 to its prior Journal Impact Factor of between 1.5 and 2.5,” wrote Marie McVeigh, director of Journal Citation Reports and bibliometric policy at Thomson Reuters, in a discussion forum on the company’s website.
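To see how a single paper can swing a journal-wide average that far, here is a minimal sketch in Python with assumed numbers; the item count and baseline citation rate below are hypothetical, and only the rough scale of the SHELX citation count comes from the story above:

# A small hypothetical journal: 120 citable items over the two-year impact-factor window,
# each drawing about 2 citations (roughly this journal's historical range), plus one outlier
# review attracting 6,000 citations in the same window. All figures are assumptions for illustration.
ordinary_items = 120
ordinary_cites_each = 2
outlier_cites = 6000

impact_factor_with_outlier = (ordinary_items * ordinary_cites_each + outlier_cites) / (ordinary_items + 1)
impact_factor_without = ordinary_cites_each

print(round(impact_factor_with_outlier, 1))  # ~51.6
print(impact_factor_without)                 # 2
# Remove one paper and the "journal quality" signal collapses back to its usual level.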

Definitely a thought-provoking article. My next question would be, how does the globalizing of STM publishers and the globalization of scientific communities impact this idea?

Instead of depending on smaller localized science journals for publication, scientists have embraced the idea of being able to submit to any journal based in any country via online submission platforms. The major scientific journals are now drinking out of the firehose even more so than previously. Perhaps this is why they are publishing more papers?

Regarding your above comment: http://scholarlykitchen.sspnet.org/2010/07/08/cups-buckets-pools-and-puddles-in-the-age-of-information-abundance-where-do-filters-belong/#comment-16585

This is the first I have heard about budgets for more journals going up. But if they do then that means people want more articles and that is what they should get. If you want a market in which only the n best articles get published then you want rationing, or some other form of control, including artificial “incentives.”

Another consideration regarding quantity is on the supply side. The amount of funding for science has grown exponentially, which means the amount of research conducted has grown.

Many more research results to be published, ergo we need more pages to do so.

It is also true (as Megan says) that on-line submission has increased the ability of everyone to submit papers to any journal. Most journals find that when they shift from paper to on-line their submissions rise on average 25%.

What this all means about quality I can’t say. Can we assume that the same percentage of papers is at the same quality level now as 20 years ago? If so, there are more top-quality papers.

Good point Pam. I don’t have the figures in front of me, but US Federal funding for basic research, which is most of the basic funding here, has probably doubled in just 10 years or so, and continues to climb. First NIH doubled, now NSF, DOE and NIST are on track to do so.

Has the number of journal pages grown apace? If not, then the average quality may well be going up, not down.

On the other hand I am not sure the researcher population has expanded with the funding. They may just be doing more expensive research. As usual the demographics are too complicated for simple generalizations. This is a sizable research question.

I am still a rather young, possibly aspiring :-), scientist (a chemist), and I can only say: people want it like that. I cannot count the complaints from established professors about the failures of the system, and yet these are the same people who could change it and do not dare to, because when it comes to their own work, of course, everything is different. So please, everyone: either stop complaining or change it. By the way, the good people do not have to rely on external filter mechanisms for information overload, since they have their own…some call it “common sense,” or thinking for yourself. If something seems too good to be true, it usually isn’t.
