First, let me make a bold admission, the kind that works perfectly in a tweet: “I was wrong.”

In a previous post, I claimed that a precipitous February drop in PLOS ONE article output was the result of a decline in its last Impact Factor. Authors (even those supportive of open access publishing) are sensitive to journal Impact Factors, so a drop in PLOS ONE’s Impact Factor (from 4.092 to 3.730), reported last June, would eventually show up as a drop in publication output 5-6 months later, as manuscripts slowly move through the publication process.

My argument was built upon the economic concept of a leading indicator, the idea that future performance is often preceded by early changes in key metrics. As PLOS ONE has already shown that its authors were highly sensitive to its Impact Factor, flooding the journal with submissions shortly after they received their first substantial Impact Factor, I predicted that authors would begin to abandon the journal when that indicator started to turn south.

It looks like I was wrong … although not completely.

Publication output is still declining at PLOS ONE, but not as fast as one would expect if a single leading indicator were the cause. Output hit a high of 3,039 research articles in December 2013; by May 2014, it had dropped to 2,276, a 25% reduction.
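
For readers who want to check that arithmetic, a one-line computation in Python, using the counts above:

    peak = 3039      # research articles published, December 2013
    trough = 2276    # research articles published, May 2014
    print(f"{(peak - trough) / peak:.1%}")   # 25.1%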

[Figure: PLOS ONE research articles published, 01/2014 through 05/2014]
[Figure: PLOS ONE research articles published, 01/2007 through 05/2014]

While this decline is substantial, it is not as abrupt as would be expected if publication flow were predicted entirely by the Impact Factor. A directional shift in publication output signals that other structural changes in the marketplace may be involved, such as:

  1. A decline in U.S.-based funding in science and medicine
  2. A U.S. government shut down in October 2013
  3. Competition from other open access journals in biomedicine
  4. Global changes in national funding in R&D, particularly in Southeast Asia

Readers can comment on other structural changes, within and outside the academy, that may reveal themselves in PLOS ONE’s declining output. I don’t think anyone can argue that the drop in publication output in 2014 is merely a statistical blip.

For those who have spent the time doing the calculation, PLOS ONE’s 2013 Impact Factor (scheduled to be reported in mid-June in Thomson Reuters’ Journal Citation Reports) is expected to drop again, with some predicting a figure as low as 3.1. A further drop in PLOS ONE’s citation performance may simply accelerate its fall. And without a PLOS ONE editor-in-chief, there is little that can be done to correct its course. If PLOS ONE were a ship, it would be one without a captain, left to the currents to determine its fate.

PLOS ONE creates huge surpluses for PLOS that can be used to subsidize its other open access titles, in-house innovation, and an advocacy program. Even if PLOS granted a sizable number of article fee waivers, that 25% drop in 2014 publication represents about $1 million less revenue in the first quarter alone. If this trend continues, supporting other PLOS activities may become more difficult.
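
As a rough back-of-envelope sketch of that revenue figure (PLOS ONE’s standard APC at the time was $1,350; the monthly shortfall and waiver rate below are illustrative assumptions, not PLOS figures):

    apc = 1350                 # USD; PLOS ONE's standard fee at the time
    monthly_shortfall = 250    # assumed average articles/month below trend in Q1
    waiver_rate = 0.10         # assumed share of article fees fully waived

    q1_lost = 3 * monthly_shortfall * apc * (1 - waiver_rate)
    print(f"${q1_lost:,.0f}")  # $911,250 -- on the order of $1 million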

As one director of a non-profit described to me recently, when you’re swimming in money, it is easy to justify new programs and services. You’re a hero and can do no wrong. No director wants to be in charge when that pool of money starts drying up.

 

 

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

42 Thoughts on "PLOS ONE Output Falls 25 Percent"

Fascinating, Phil. As for the prediction, leading economic indicators are almost always more qualitative than quantitative; social systems are not that predictable. So in that sense I would say you were right, not wrong. This is just what you predicted; now let’s see if it continues. This may be a good empirical indicator of the impact of the impact factor.

At last week’s SSP Annual Meeting, Jan Velterop quipped that we had reached “peak subscription.” Perhaps we’ve also reached “peak PLOS ONE.” If both are correct, then we get back to the issue of overall funding, which may be structural: a lack of adequate funding for scientific outputs validated by editorial and peer review systems.

However, I suspect with these data that something else is going on. The RCUK data indicate that the specialty OA titles are doing a very good job of competing for papers, with the big publishers dominating the funding lists. The mega-journal has a place, but its place may be moving toward a more natural level in an ecosystem with alternatives, many of which have more prestige and relevance for their intended audiences.

PLOS ONE seems caught in a vicious cycle — impact factor decline + competition = higher quality papers go elsewhere. Rinse and repeat. There is a natural point at which the better edited and better curated OA journals will stop growing as well. Where PLOS ONE stands at the end of this process can be described generally as “smaller and lower impact.” The degree of these changes remains to be seen. As you note, the direction isn’t hard to predict, but the pace is difficult to gauge precisely.

As with many analogies, the idea of “peak subscription” has flash, but thins out on second look. How is scholarly research or the subscription business model similar to a finite resource like oil? Hasn’t peak oil been predicted over and over again? I’m more concerned about “peak attention” or “peak distraction.”

PLOS publishes other journals besides ONE; it would be interesting (in view of a discussion of PLOS income) to see how the total number of papers published across all their journals is trending. It is possible that papers are migrating from PLOS ONE to their other, more specialized journals as those titles become more popular, which would make the overall trend less bleak.

Beyond lacking an editor, PLOS ONE is too big to read. Articles that go undiscovered will go uncited, thus diluting the impact factor.

I tried to download a paper from PLOS ONE only a few weeks ago, but couldn’t – apparently because of some technical problem. Seems a shame that people should pay to have their work made available if the simple mechanics of that availability can’t be guaranteed.

One would expect that having a journal publish so many articles would increase the number of citations the journal gets overall. At the same time, one might expect the number of citations per article to drop. Does the Impact Factor depend more on the former metric or the latter?

In this, as in many ways, PLOS ONE is in unusual territory due to its incredibly fast growth rate. The JIF, of course, is an unweighted ratio of citations (numerator) to size (denominator) but because PLOS ONE was growing so quickly, there are two competing factors at play.

Usually when a journal increases publication volume, it jumps from 4 to 6, or from 6 to 8, issues a year, then stays at that increased frequency for a while.
Say the journal increases its published volume in year X. The JIF for that year will be based on how that newly increased volume in year X cites the content of years X-1 and X-2. More content in year X can mean more journal self-citations in the year-X JIF numerator; it also means that more content has brought more attention to the title. This can give a slight rise, but one that is often hard to tease out amidst the many things that can affect citation rate.

In year X+1, however, that new content contributes not to citations but to size. An increase in publication volume means that, in year X+1, the average age of the items in the denominator has decreased. Because an article usually has more citations two years after publication than one year after, this would predictably result in a drop in JIF in the year following an increase in volume. Many journals get a bit of a shock that way the year after they increase published volume, even a little.

What makes PLOS ONE less predictable is the pattern of year-on-year increases in volume: not in one year, but year after year so far. It might be that the two effects, increasing volume versus decreasing age, have been obscuring each other, with each successive year’s increase in volume compensating for the prior year’s younger, less-cited articles.
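
To make the age-mix arithmetic concrete, here is a minimal sketch with invented per-article citation rates (nothing here uses real PLOS ONE data):

    def jif(n_minus1, n_minus2, c1=1.5, c2=3.0):
        # Two-year JIF for year Y: citations received in year Y by items
        # published in Y-1 and Y-2, divided by the count of those items.
        # c1 and c2 are assumed average citations per article for 1- and
        # 2-year-old content (c1 < c2, since articles are usually cited
        # more in their second year).
        return (c1 * n_minus1 + c2 * n_minus2) / (n_minus1 + n_minus2)

    print(jif(10000, 10000))   # 2.25 -- flat output: 50/50 age mix
    print(jif(20000, 10000))   # 2.0  -- rapid growth: younger, less-cited
                               #        items dominate the denominator
    print(jif(20000, 20000))   # 2.25 -- growth stalls: the age mix shifts
                               #        older and the ratio recovers

Note the sketch holds per-article citation rates fixed and isolates only the age-mix effect; real JIF movements combine both.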

The slower rate of growth between 2012 and 2013 makes this a very interesting year – and now I’m a spectator, so I am waiting like everyone else to see what happens.

The “ship without a captain” comparison is one I’ve always liked. The importance of a good editor-in-chief, or an editor with oversight of a journal, cannot be overstated, IMO. They not only keep the trains running (mixing my transportation metaphors now) in day-to-day operations, but they also have to look toward the horizon to see whether the journal/ship needs to shift course. Something often missed is that the folks in charge also need to look behind them to see if anyone is catching up, or doing something differently that may cause their ship to be overtaken.

In terms of peer review, I’ve always wondered how a journal with no primary editorial oversight, which accepts 65-70% (?) of submissions, could claim to be conducting “rigorous” peer review. This is not to say the peer review approach taken by PLoS ONE isn’t valid. It may very well be, but the journal seems to have become so big that it is impossible to really know, and it feels like no one has control over it. Too much of anything can be a bad thing, as the saying goes.

It will be interesting to see if the trends Phil describes continue. Perhaps PLoS ONE will adjust and reinvent itself to address this.

One thing I’d be really interested to see is the same graph for all of the journals that don’t include ‘novelty’ in their editorial decisions. Are researchers increasingly opting for these even though PLoS ONE itself is publishing less?

Don’t they all include novelty as a filter?

I mean, nobody’s going to publish work that’s been published previously, are they?

At least not knowingly.

I’d be more interested in seeing the data for journals that exclude ‘scientific interest’ from their selection criteria. My understanding is that PLoS One uses scientific rigor as its primary screening mechanism. Aside from that, if you’ve collected your data correctly, reached reasonable conclusions, haven’t plagiarized, etc., and can pay the APC, they’ll publish your paper.

Seems like every journal I submit a paper to these days has set up a PLOS ONE-like overflow journal – I’d be interested if the Scholarly Kitchen would write a post comparing these. Anyway, I wonder what the graph would look like if you were to sum up all of the papers published in all journals with PLOS ONE-like peer review (i.e. for technical soundness only). Disclosure, for whatever it’s worth: I’m an academic editor for PLOS ONE.

Another possible explanation: everyone published in PLOS ONE when they saw the impact factor. For their next paper, they will try something else (to get a feel for other systems, and because one needs more than one journal on one’s publication list; this is true even for Science/Nature!).
With competition from post-publication peer review journals, PeerJ, and all the other open access journals, it is actually normal that PLOS’s market share gets smaller: they were first, but the first-mover advantage is not big…
So this may be a natural shift in publication numbers that stabilizes soon.

Let’s all hope that the IF will die in the next few years.

As I have said before, no one (well, at least no scientist that I know) cares about impact factor. I have (a small amount of) personal authority on this, in this particular case, because I have 5 papers in PLoS ONE. The first one went in before their famous “impact factor” surprise and exponential phase. I had considered PLoS to be sort of a dumping ground, but submitted to PLoS Comp. Bio. They said that the paper was not appropriate for Comp Bio (which is weird, since it is clearly a comp bio paper), and suggested sending it to ONE. I think that they were trying to push papers to ONE at that time. Anyway, so whatever, sure. And there it begins. Once PLoS ONE started getting so many papers, it really did look like a dumping ground. Note that we (or anyway, I) care a lot more about how many papers there are, and how many *I* think are of low quality, than about impact factor. A journal that publishes that many papers is basically a dumping ground. That said, I’m not above dumping if I have something to say and think that I can get it in print rapidly with light peer review; thus the next three papers, which I pretty much figured weren’t going to get into a real journal without a long fight, and I didn’t want to fight about them. So, PLoS ONE was the clear ticket. My last PLoS ONE paper is actually a joke on PLoS ONE. Oh, the work is real enough — the joke is that it’s about what happens to science when there’s light (or no) peer review… so that one I explicitly sent to PLoS ONE as a joke on PLoS ONE. In fact, in the original draft of the paper I used PLoS ONE as the example of a journal with light peer review, but of course the reviewers balked at that, so I changed it to PLoS Currents, and quoted their web page, which basically says that they have light peer review. PLoS couldn’t argue with quoting their own site… so there you have it. My point is that there are many considerations that go into where scientists (well, I anyway) send a paper, but none of them are the impact factor of the journal. Impact factor is a waste-of-time joke that journals are playing on themselves, IMHO.

I wish that this were the case, but it is not. “Impact factor” is the search term most commonly used in association with most journals (just put your favorite journal name into Google and see what the predictive search comes up with), and journals constantly receive queries from authors about a journal’s Impact Factor (amazingly even non-research journals). More anecdotally, I can say from discussions with a great many scientists, academics on editorial boards, and other editors that it is a query that comes up continuously.

So, sadly a great many scientists do care about the Impact Factor. The reasons for this are largely to do with research assessment (both formal and informal) that uses it (inappropriately) as a proxy for article quality. Journals have played their part by reacting to this in the hope of attracting authors and subscribers, creating a vicious cycle, but to say they are the cause and culprit is incorrect. If we are to break the vicious cycle, as initiatives like DORA hope to, we must be honest about the roles of all in this process: scientists, review boards, funders, and journals.

Yes, that probably passes for evidence in PLoS ONE. 🙂 Seriously, though, somewhere else someone posted a survey that, as I perhaps incorrectly recall, put impact factor quite a way down the list.

If you look at actual search data, a very common search term is the word “impact factor”. Given the people doing these searches are scientists, this indicates that they do care. I’m not saying that it’s what they care most about, just that this is pretty strong evidence that they do in fact care.

I’m not sure “care” would be any more accurate than “curious” — the result of so much chatter from publishers and librarians. Scientists who actually concern themselves with the impact factor must soon learn that it is all about individual papers cited over a relatively short period of time, and a poor proxy for the value of a journal.

Most of the world still considers “circulation” a better indicator of readership. But circulation figures are not published by many research journals.

That is all very true. But the problem is that it is nevertheless used as a proxy by people who make decisions about their careers. I’m sure the vast majority of scientists understand that it is a deeply flawed metric. However they also know that it is a metric tenure review committees and their peers pay a lot of attention to. See http://am.ascb.org/dora/

Top search terms used to bring readers to The Scholarly Kitchen for the last 365 days:

plos one impact factor
scholarly kitchen
plos one impact factor 2013
plos one impact factor 2012
impact factor plos one
impact factor 2012
plos one impact factor 2014
the power of habit
plosone impact factor
elife impact factor
plos one
plos impact factor
impact factor of plos one
http://scholarlykitchen.sspnet.org/
plos one impact
impact factor 2014
plos one if

This study cites it as the “most important” factor for 20.7% of PLoS authors choosing the journal in 2013, the second most common reason after the more nebulous “quality of journal” (27.7%).

https://peerj.com/articles/365/

Interpreting this is left as an exercise to the reader, etc 😉

First, I’m the first to admit when I’m wrong, and I appear to be (somewhat) wrong. However, your gloss appears to disagree somewhat with the authors’ reading of the data (and mine). The authors’ own gloss observes that, after domain relevance, “it is clear that impact factor and audience are the more essential qualities” (p. 58), and that faculty “select journals in which to publish based on characteristics such as topical coverage, readership, and impact factor” (p. 6). IF seems (from Fig. 33) to be indistinguishable from “The current issues of the journal are circulated widely, and are well read by scholars in your field,” which is vague and encompasses pretty much everything else. Moreover, again not to take this too hard to task (especially as I haven’t read the methodology in detail), asking people whether IF matters to them, especially in a Likert survey like this one, is very different from observing what folks really do, or from scoring open-ended questions. Moreover, in these results they did a weird collapse of the 10-point scale into 3 points (pp. 11-12), which pretty much makes small differences vanish. All this said, there is much of interest in these data. Thanks for pointing it out.

(Oops. This was supposed to be a reply to David’s note below.)

What people actually care about and what they will admit to caring about are two different things. Richard is right, sadly.

The evidence suggests that neither a decline in US-based funding of science and medicine, nor the US-government shutdown, contributed to the fall in PLOS ONE articles published.

There is no discernible decline in the proportion of articles published in PLOS ONE that declare “NIH” as a funding source in their financial information.

In December 2013, 224 of the 3,039 articles declared such funding (7.4%).
In May 2014, 192 of the 2,287 articles declared such funding (8.4%).

There is no discernible decline in the proportion of articles published in PLOS ONE that declare “United States of America” in the author affiliation field.

In December 2013, 913 of the 3,039 articles declared this affiliation (30.0%).
In May 2014, 722 of the 2,287 articles declared this affiliation (31.6%).
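
For convenience, the arithmetic behind those proportions (counts as quoted above):

    dec_2013 = {"nih": 224, "usa": 913, "total": 3039}
    may_2014 = {"nih": 192, "usa": 722, "total": 2287}

    for label in ("nih", "usa"):
        p_dec = dec_2013[label] / dec_2013["total"]
        p_may = may_2014[label] / may_2014["total"]
        print(label, f"{p_dec:.1%} -> {p_may:.1%}")

    # nih 7.4% -> 8.4%
    # usa 30.0% -> 31.6%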

Richard, thank you for doing this work. Ruling out other possible causes of the decline in PLOS ONE output makes their drop in Impact Factor, and competition from other journals (which is related to their impact standing), more plausible as likely explanations.

One issue is that *having* an impact factor – used as a binary statement of “passed minimum quality threshold” – is a meaningful bit of information even if we more or less ignore the number itself. (A lot of ranking and reporting is done on impact factor, yes – but a lot is also done asking simply “is the journal ISI-indexed”?) This might explain the sharp uptick in submissions after an impact factor was first assigned, giving a stamp of approval, without automatically implying sensitivity to variation in the IF thereafter.

(As an aside, what would the “baseline” impact factor be for *all* ISI-indexed publishing across disciplines? If <4, it might be that PLoS is simply drifting toward the mean, now that it represents a large chunk of what's out there…)

My gut feeling is that #3 is likely to explain a lot of the drop – "competition" from other journals – and it would really be interesting to know where those other papers are going – are other journals rising to take the slack?

We might well be seeing selection pressure among OA journals based on APCs rather than on impact factor, for example, or people could be switching back to "conventional" journals after experimenting with PLoS.

It would be interesting to see if there is a similar graph for submissions to PLOS ONE, as opposed to numbers of papers published. That would provide a better indication of the impact of impact factor on authors’ decisions. A drop in output for any journal could result from a reduced acceptance rate rather than fewer authors wanting to publish with that journal. Could anyone from PLOS provide submission figures please?

Submissions data would allow testing of this hypothesis: owing to the decline in PLOS ONE’s impact factor, the journal has become more selective about the papers it publishes, in an attempt to reverse the downward trend in impact factor, resulting in fewer articles being published.
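
A toy decomposition of that hypothesis (the submission counts are invented for illustration, and the months-long lag between submission and publication is ignored):

    def published_output(submissions, acceptance_rate):
        # Published output is just submissions times acceptance rate.
        return submissions * acceptance_rate

    # 4,470 submissions/month at 68% acceptance gives roughly the
    # December 2013 peak of ~3,040 articles.
    print(published_output(4470, 0.68))          # ~3040

    # The same 25% fall in output is consistent with either:
    print(published_output(4470 * 0.75, 0.68))   # ~2280: fewer submissions,
                                                 #        same acceptance rate
    print(published_output(4470, 0.68 * 0.75))   # ~2280: same submissions,
                                                 #        tighter acceptance (51%)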

You make an excellent point, Sian.

Anecdotally, I know of a few researchers who’ve been surprised by rejection emails from PLOS ONE recently, and for the kinds of reasons you might expect from more traditional journals.

The examples that I’m aware of could be aberrations, so I too would be interested to learn whether PLOS ONE’s rejection rate has changed, and to see some kind of analysis of whether the reasons for rejection are changing.

Phil, remember that PLOS ONE has an editorial board of >5,000, which means that there would be some trace of a change in policy on their website. PLOS ONE did implement a structured reviewer form on December 13th, 2013, which does not appear to change the criteria for publication; see: http://www.plosone.org/static/reviewerForm.action
Authors always look for reasons to explain why their paper was rejected. It’s human nature.

You make a good point, Phil. It’s probably coincidental that I’ve heard of a couple of rejections that seemed to me to fall outside the policy. I’m only citing anecdotal evidence, and as you say, PLOS has a lot of editors and a lot of reviewers, so there’s probably the occasional reviewer who doesn’t fully take the criteria on board and applies traditional standards.

Again, anecdotally, I do hear some scientists saying that PLoS seem to be ‘tightening up’, whatever that means.

So another question for you: how quickly have the PLOS editorial board and reviewer pool grown? Is there a chance that, with greater mainstream acceptance, there may be some mission creep as more traditionally minded scientists review and edit for it? I hate to say it, but not everybody reads the instructions when they review a paper. It’s just a thought, and I don’t expect it to be contributing to the drop in output unless people are finding PLoS less attractive to submit to as a result.

I believe that PLoS ONE’s rejection rate has remained pretty constant at around 30-35% over the years (although I am certainly not speaking with any kind of authority here).

Thanks, Sian – surprising this took three days to emerge: PLoS has been criticized all too often for not providing a consistent quality filter, and there have been some signs of an effort to send manuscripts to more referees and to provide more guidance for consistent decision making.
While tighter peer review is somewhat unlikely to be the only explanation for the sharp inflection, in the absence of submission data all other theories presented are idle speculation. Maybe we’ll end up congratulating PLoS ONE on tightening its peer review.

“…but not as fast as one would expect if a single leading indicator were the cause.”
“..it is not as abrupt as what would be expected if their publication flow were predicted entirely by their Impact Factor.”

Phil – can I ask what you base these statements on? (not that I disagree necessarily, just wondering)
