
Last week PLoS ONE received its first impact factor — a stunning 4.351.  This puts the open access journal in the top quartile of ISI’s “Biology” category, a group of journals with a median impact factor of just 1.370.

Within minutes of the 2009 impact factors being released, one could almost hear the sound of champagne corks popping in San Francisco.  Or was it the sound of competing publishers banging their heads against their office walls in utter amazement?  For there seems something very strange about a journal that accepts nearly 70% of all submissions yet achieves such a score, especially on its first assessment.

For anyone suspecting that PLoS ONE engaged in editorial shenanigans to boost self-citations, there is no evidence of such.  Self-cites represent just 8% of the citations used to calculate its impact factor, and removing them drops the score only to 4.00.  We must accept that this is a valid and correct measure of the citation impact of PLoS ONE articles.
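For the curious, those two figures are arithmetically consistent: stripping an 8% self-citation share from a 4.351 score leaves roughly 4.00. Here is a minimal sketch of that calculation; the citation and item counts are invented, chosen only to reproduce the reported scores (the true underlying counts are not shown here):

```python
# Sketch of an impact factor with and without self-citations.
# An impact factor is citations received in year Y to items published in
# years Y-1 and Y-2, divided by the number of citable items from those years.
# All counts below are hypothetical, chosen to match the reported scores.

def impact_factor(citations, citable_items):
    return citations / citable_items

citable_items = 1000                     # invented denominator
total_cites = 4351                       # invented, gives the reported 4.351
self_cites = round(total_cites * 0.08)   # 8% self-citation share

print(round(impact_factor(total_cites, citable_items), 3))               # 4.351
print(round(impact_factor(total_cites - self_cites, citable_items), 2))  # 4.0
```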

So how can a journal that allows 7 out of 10 manuscripts through their gate achieve such a stellar rating?

We need to remember that it costs $1,350 to publish an article in PLoS ONE. And while their article processing charges (APCs) are not as high as competing commercial publishers’, they still present a barrier to authors. Authors with access to funds (either through their own research grants or through their institution’s open access publishing funds) represent a characteristically different group than the rest of the author population. (I’ll address APC waivers later in this post.)

Authors with research funds have already gone through a form of peer-review by their granting agency, which has selected their proposal above all others because it has merit and stands a chance of producing interesting results.  And authors with access to publication funds through their institution have passed through the finest filter of them all — the peer-review that rewarded them with employment at an elite institution in the first place.

Either way, being willing and able to pay $1,350 to publish an article signals that there is something distinctive about PLoS ONE authors, and it shouldn’t be so surprising that articles written by the 70% who were able to pass through the publication review gate are performing so well.

Is a High Impact Factor a Blessing or a Curse?

Normally, publishers would view a high initial impact factor as a blessing.  After three hard years, scientists have recognized that PLoS ONE is a source of quality articles that are worth both their attention and citations.  More importantly, authors who published early with PLoS ONE did so for reasons other than an impact factor.

For many authors, a journal’s impact factor strongly determines where to submit one’s manuscript.  Authors initially skeptical of a journal operating on the principle of light peer-review may now consider PLoS ONE a destination for future manuscripts. PLoS ONE should prepare for the deluge.

While a subscription-access publisher may dream of being overwhelmed by manuscript submissions, this blessing may come as a curse for a publisher that has banked on making money from not spending a lot of time and resources rejecting manuscripts.

In financial terms, the purpose of PLoS ONE is to subsidize the cost of publication for their two flagship journals, PLoS Biology and PLoS Medicine, two highly-selective, high-cost journals that would be unable to continue in their current state without financial subvention.  PLoS ONE is able to do this because of low selectivity — rejecting only 3 out of every 10 manuscripts.  In order to make their parent organization money, therefore, they need to rely heavily on automation and minimal intervention from the publisher.  For PLoS ONE, being a “bulk publishing” journal serves a clear purpose and one that should not be scorned.

Living with an Impact Factor

On the face of it, the future of PLoS ONE appears rosy: increased submissions lead to increased revenue and more support for the flagship journals that PLoS is chiefly interested in maintaining as highly-selective publications.

This future, however, is based on two assumptions:

  1. The quality of PLoS ONE submissions remains constant, and
  2. The proportion of authors paying full article processing charges remains constant (no increase in free riders).

As mentioned above, the fact that PLoS ONE now has a respectable — even enviable — impact factor will draw a different kind of author who may not have considered this journal in the past.  If these new authors resemble the kind that most conventional journals routinely deal with, PLoS ONE may need to be more discerning about which manuscripts it lets through its gate; that is, if it cares about its future impact factor.  If it does care, then a declining quality of manuscript submissions means that PLoS ONE will need to spend more time and energy reviewing manuscripts only to reject many of them without payment.  This will only drive up the cost of running their operation, or alternatively, dig into the surplus they send back to support the operations of PLoS Biology and PLoS Medicine.

Secondly, and more importantly for PLoS ONE, their financial model requires that the same proportion of authors will pay article processing fees.  But because PLoS has separated editorial decision-making from their business model, they have little sway in keeping payments flowing if authors decide they are either unwilling or unable to pay.  From their Author Guidelines page:

Editors and reviewers have no access to payment information, and hence inability to pay will not influence the decision to publish a paper. These policies ensure that the fee is never a barrier to publication.

How many PLoS ONE authors claim financial hardship?

While I cannot find public information on the percentage of waivers or discounts given to authors who express financial hardship, I was told by a PLoS publisher that their system works exceptionally well, with only a small percentage asking for — and receiving without question — article processing waivers.

Now there might be a valid reason for not making waiver statistics public.  With a “no questions asked” policy, the system essentially works on voluntary payment, which is a very unstable kind of market.  If it becomes public that enough authors are unwilling to pay, our moral compass becomes activated and we start to question why we should pay (even a discounted price) when others are given a free ride.  The only other option is to keep waiver information secret and give authors the sense that everyone (or nearly everyone) is paying their fair share.

The second assumption — that the ability and willingness of future authors to pay publication charges remains constant — is based on the notion that future authors will be demographically similar to past authors, now that PLoS ONE has been given a laudable impact factor. Adding proportionately more non-paying and under-paying authors puts PLoS ONE in a similar situation as raising the acceptance bar would — the cost of running their operation will go up.  This means either raising the price for those still willing to pay or reducing surplus payments that support the PLoS flagship journals.  Either solution is gloomy.  The darkest outcome is a total market failure, where price sensitivity starts driving authors out of the payment group and into the free group, creating spiraling costs for the remaining and dwindling few.
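The mechanism behind that spiral can be put in back-of-the-envelope terms. Everything in this sketch is invented (the article volume, the waiver fraction, the per-article cost); it only illustrates how a rising share of non-paying authors pushes up the break-even APC for those who still pay:

```python
# Hypothetical break-even sketch: if every article costs the same to
# publish but only paying authors generate revenue, the APC needed to
# cover costs rises as the waiver fraction grows. All numbers invented.

def required_apc(articles, waiver_fraction, cost_per_article):
    paying_authors = articles * (1 - waiver_fraction)
    total_cost = articles * cost_per_article
    return total_cost / paying_authors

print(round(required_apc(1000, 0.10, 900)))  # 1000 with 10% waivers
print(round(required_apc(1000, 0.40, 900)))  # 1500 with 40% waivers
```

Raising the acceptance bar has the same shape: rejected manuscripts add review cost without adding APC revenue.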

In sum, PLoS ONE’s first and astounding 4.35 impact factor should be reason neither for celebration nor ridicule, for it puts the publisher in a position that may require them to start treating PLoS ONE like other conventional journals.

This may be time to consider launching PLoS TWO.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist.


84 Thoughts on "PLoS ONE: Is a High Impact Factor a Blessing or a Curse?"

This misses a couple of factors:

1. If plos one is highly automated (and, to a large extent, run by volunteer editors), then it should be reasonably scalable. Operations costs will not rise linearly with submissions/publications. Plos wouldn’t need to double their staff in reaction to doubled submissions. Even if they do have more lower quality submissions (rejected by unpaid academic editors early in the process), and even if it does get more waiver requests, that may well be offset by the increased per-(paid)manuscript profit margin that will come with increasing volume.

2. Production costs money. Therefore rejected manuscripts do not cost the publisher the same amount as accepted manuscripts.

If these factors aren’t enough to keep Plos in the lifestyle to which they’ve become accustomed, then I’m sure, as a maturing organisation, they already have plenty enough basic business ideas for solving that problem.

The scale question is an interesting one. PLoS has been the beneficiary of a great deal of good will from the scientific community. They’ve been able to keep their costs low, as you note, by recruiting an all-volunteer unpaid set of editors. If others are to take on the PLoS One business model, or if PLoS One is to massively expand its operations, the question of how this is going to scale becomes an interesting one. If the workload increases 10X, are there 10X more volunteers out there willing to pitch in? 100X?

As for point 2, yes, rejected manuscripts cost less than published ones, but they still cost something.

I agree with David, an increase in submissions equals an increase in costs – both time and money. But as Joe says, not linearly – I’d guess it’s more like a semi-log scale. A good inaugural IF score attracts more submissions, but not necessarily more publishable submissions.

It’ll be interesting to see the reaction by institutions that use the Impact Factor as a measuring stick for tenure and other advancements. There are institutions that require a PI to publish X number of papers in journals with an impact factor above a certain level as part of the tenure process. The presumption is that those are highly selective journals, and to get in, the author’s work has to be judged of significance to the field. If a tenure candidate can now meet those requirements by publishing in a journal with a 70% acceptance rate, where the requirement is that the paper be technically accurate, does that invalidate the impact factor as a meaningful measure in this arena? Or do the institutions just scribble in a “but not PLoS One” addendum to their requirements?

“Significance to the field” is a pseudo-proposition. For example, how many papers about p53, NF-κB, etc. have been published in Nature, Science and Cell? How many of them are merely incremental? Apart from the most minimally incremental work, all papers published in journals are significant to their scientific field. “Rigorous” technical quality assures the quality and significance of a paper to the field!

I don’t agree that “significance” is a pseudo-proposition. Different experiments have different value and different impacts on a field. The first use of green fluorescent protein for imaging in a living organism was more significant than a paper that described one of many later-engineered gfp variants, as one example.

The question really is what measure one uses to assess that significance. Some would argue that mere acceptance into a journal with a high impact factor is a measure of that significance. PLoS is instead experimenting with article level metrics, allowing readers to make those sorts of assessments post-publication. I’m not sure of the accuracy of either approach, but I strongly reject the idea that all papers are created equal.

The first use of green fluorescent protein for imaging in a living organism was more significant than a paper that described one of many later-engineered gfp variants, as one example.

This is exactly what I mean for “significance”.

In fact, The “significance” of many top journals is based on the broadness of readership.

Let’s look at that argument a little more. So, “significance” to a community is meaningless? It should be “significance” in some objective fashion? The fact is that science trends with or against community standards — look at most hypotheses, and you’ll see this. “Significance” in the social aspect of science is just as important as it is in the objective measures of science. A finding that has no relevance to the current state of science is much less significant, especially in the dust-storm of information we have all around us these days.

The argument they use for explaining PLoS One’s success as a function of self-selection based on money makes no sense in the face of the fact that they have the same financial hardship exemptions as other journals. Access to most of the people in the rest of the world who can’t afford access to other journals seems like a more likely explanation.

There is no citation evidence that open access journals benefit from increased access. Frandsen (2009) analyzed the citation patterns of open access journals and found them no different from traditional journals.

“The results of the multivariate linear regression show that open access journals are not characterised by a different composition of authors from the traditional toll access journals. Furthermore, the results show that authors from developing countries do not cite open access more than authors from developed countries.”

Frandsen, T. F. 2009. Attracted to open access journals: a bibliometric author analysis in the field of biology. Journal of Documentation 65: 58-82.

Note that many subscription access journals offer access to developing countries through programs like the WHO’s Hinari efforts.

I wonder if part of the reason for the high impact factor is that the costs associated with publishing in PLoS biases submissions towards those areas which are expensive and fast-moving. For example, there appears to be little plant pathology (an area where even the better journals barely get an impact factor of 1).

This does suggest, though, that PLoS One might start publishing papers from areas with traditionally lower impact factors, simply because it’s a good way to get a high IF paper. This won’t necessarily affect quality.

The major challenge for PLoS1 now will be to continue to resist the tyranny of the Impact Factor.

It is to their credit that, in exploring a different publishing model, PLoS not only avoided the usual ‘impact-factor improvement strategies’ but actively pursued an approach (a high acceptance rate) many would have predicted would reduce this. Moreover, they have touted the use of multiple, alternative measures of impact, which represent another worthy experiment.

There will now be pressure from within and outside the organization to sustain and improve on the impact factor – particularly given how close they have got to the widely assumed quality threshold of 5.0. It would be a shame if they were to compromise a bold experiment by succumbing to this.

“So how can a journal that allows 7 out of 10 manuscripts through their gate achieve such a stellar rating?”

Most authors are aware that PLoS ONE is a high quality journal. So many of the submitted papers were first rejected by NSC or other highly selective journals.

I have published 6 original articles since 2007. However, I only submitted two papers to PLoS ONE, because I think my other papers were not qualified for PLoS ONE. Both papers submitted to PLoS were accepted. The editorial process saved me a lot of time and energy! These two papers reported two “firsts” in the tumor stem cell and tumor vasculogenesis areas, challenging current dogma. They resulted in two invitations for review papers from journals with IF > 5, and two invitations for book chapters. The first paper received an invitation a few days after going online.

However, the first paper suffered a lot before being published in PLoS ONE. One more year of experiments to satisfy Nature’s editors, and then it was killed on the grounds that it was not physiological because it used only animal models (I do not know whether all Nature biomedicine papers involve both animal models and humans). Then it spent 4 months at another cancer journal (I was told the editors were negotiating with reviewers). The second paper was submitted directly to PLoS ONE without any rejection: a most economical but highly cited paper (18 citations within two years; the self IF of the two papers is > 10).

Now the third paper is in review. That paper spent one year trying to satisfy another top journal’s editors. Unfortunately, it was rejected. To save time, I went straight to PLoS ONE without trying IF 7-10 journals.

“We need to remember that it costs $1,350 to publish an article in PLoS ONE”.

This is much cheaper than any printed journal.

“How many PLoS ONE authors claim financial hardship?”

I claimed twice. If the third one is accepted, I will claim a third time, because no big funding supports my cutting-edge research.

“If it becomes public that enough authors are unwilling to pay, our moral compass becomes activated and we start to question why we should pay (even a discounted price) when others are given a free ride.”

When one’s financial declaration includes big funding such as NIH, nobody wants a free ride.

I don’t think they feel too beholden to maintaining their IF. I mean, it’s great that Chinese researchers and others can now consider PLoS ONE for submissions, but they also just started showing off their article-level metrics. The real interesting thing will be to see whether and how the two diverge.

(I’m going to leave the rest of the nonsense in the post untouched. You know what I’m talking about.)

Let’s engage in some of the “nonsense.” What do you think PLoS ONE will do if:

1) They receive more unpublishable manuscripts, and
2) A larger share of authors who are unable or unwilling to pay full price?

This is the point of the post that illustrates some new challenges PLoS will face in the next year.

I think Philip’s comments miss the mark on the main purpose of PLOS ONE. The post makes it seem as if the motivation and purpose of PLOS ONE are mainly driven by financial motives. I am somewhat involved in the editorial staff of PLOS ONE. Never once was the issue of finances discussed or mentioned in conversation. The main criterion is the principle of technical soundness and reviewers’ feedback. The fact is that PLOS ONE has published many papers with high citation rates. By these measures, PLOS ONE has served its purpose.

If the financial calculation turns out to be supportive of all the PLOS operation, it is fine with me. But the financial consideration was never discussed or brought up in any conversation. Therefore, it is a bit unfair to frame the success of PLOS ONE from the financial perspective.

In fact, the IF of PLoS ONE is underestimated compared to Nature, Science and Cell. I did one calculation: I randomly chose three months of papers from PLoS ONE and Cell over the same period, and looked up their citations in Google Scholar and Scopus. I found that the Google citation counts are higher than the Scopus counts for PLoS ONE, but not for Cell. This suggests that PLoS ONE is more widely accessed by readers around the world. The IF counts citations from only about 9,000 journals, but there are 25,000 journals in the world.
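The comparison described above can be sketched in a few lines. This is a hypothetical reconstruction of the method, with made-up citation counts rather than the commenter’s actual data:

```python
# For each journal, compare total citation counts for the same sample of
# papers as reported by two sources (e.g. Google Scholar vs Scopus).
# A ratio well above 1 suggests the broader web captures citations the
# index misses. All counts below are invented for illustration.

def coverage_ratio(google_counts, scopus_counts):
    return sum(google_counts) / sum(scopus_counts)

plos_one = coverage_ratio(google_counts=[12, 30, 7], scopus_counts=[8, 20, 5])
cell = coverage_ratio(google_counts=[40, 55, 22], scopus_counts=[39, 54, 21])

print(round(plos_one, 2))  # 1.48 -> Google finds many extra citations
print(round(cell, 2))      # 1.03 -> the two sources roughly agree
```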

At the higher level, the motives are financial: PLoS aims to show that Open Access is financially sustainable (I think of them as a “not for loss” organisation). Presumably when they worked out their funding strategy, finance was their main consideration. As long as they estimated their income and expenditure well enough, I can’t see that they would be bothered. I had also always had the impression that they have high ethical standards, so they won’t interfere in editorial decisions (as Elsevier have, for example).

Yes, as long as the financial consideration does not interfere with the editorial decision, it will be totally fine. We want the organization to be financially sound while maintaining the scientific integrity. The editorial decision was never made based on whether additional $$ can be made by publishing particular articles.

Remember though, that to be successful, “not-for-profit” organizations have to make a profit. If they just break even, there’s no money for new initiatives or for weathering any change in the environment. This is vital for survival, so “not for loss” is not an acceptable strategy.

“So how can a journal that allows 7 out of 10 manuscripts through their gate achieve such a stellar rating?”

The people working with PLoS ONE do spend a lot of time promoting themselves, their articles, and maintaining an active web presence.

For example, if you looked at most journals, you would never know science blogs existed. The PLoS ONE people, on the other hand, actively track blogging about their articles and reward it.

I’m not saying bloggers are making PLoS ONE a success; just that the cumulative effect of that kind of effort should not be underestimated.

This is, to me, an important comment. The social dimension of citation — awareness drives it, as well as quality — is often underappreciated. It may be that PLoS One has made more people aware of itself than comparable journals — partly because of blogging, perhaps partly because of bulk publishing itself, and partly because of the PLoS brand’s effects.

PLoS does have an amazing Apple-esque level of brand loyalty. There are many other open access publishers, not-for-profit publishers, publishing companies formed and run by members of the community, yet none seem to have generated the same level of rabid fanbase as PLoS has. It’s a testament to their brand-building and marketing skills. Also a reminder of why brands are so important in academic publishing.

This is becoming a fairly common marketing practice for publishing companies (the COB enters the fray here as one example). I write a similar promotional blog here. Given the relatively small level of participation in the blogosphere by researchers in most fields, these efforts are more about SEO than about direct promotion.

But it’s important to separate out marketing from actual community interest and response. And that’s one of the reasons why many of the metrics PLoS has chosen to measure are so inherently flawed. Counting the number of times an article is blogged about in a world where publishers constantly blog about their journals’ articles is completely devoid of meaning. People are quick to attack the admittedly problematic Impact Factor, yet compared to metrics like this, it’s a shining beacon of significance.

Hi David. The idea behind the Node (Development’s blog) is actually to let *any* developmental biologist write content for it (and they can). The Node came out of a survey where people indicated they wanted a place to share news.
Of course having the Node in-house does make it convenient for Development to also highlight items from the journal, but that wasn’t the reason to set it up in the first place, and neither was SEO.
Simply put: If nobody had *asked* for a community website, it probably wouldn’t be there.

Regardless of the idea behind it, SEO is a tremendously valuable function that it can provide. I’m sure that at COB you’re well aware of your traffic statistics and how many visitors come from search engines. Any way to help people find the information they’re seeking is worth exploring. There are currently paid staff members from the journal writing blog entries about the journal’s articles, so it’s clearly being used for marketing activities along with other purposes.

It will be interesting to see what the answer to your editorial’s question turns out to be–will anyone participate? I think The Node is likely a better approach than others in that it is focused on a smaller, tightly-connected research community rather than trying to be a forum for everyone in science. I’m not sure if this is granular enough though, will plant developmental researchers really care that much what frog researchers are discussing? One could always build in silos for sub-groups if that turns out to be the case.

Bora actually had his blog at ScienceBlogs long before he joined PLoS. He also has a penchant for heavily promoting *anything* he is involved in (which may have been a factor in hiring him, but it’s likely not the other way around)

Whilst this post contains several valid points, I perceived that it missed the most important one, which I feel needs to be shouted out loud over and over again: it simply demonstrates that the IF is a TOTALLY LUDICROUS FACTOR for evaluating researchers. All I hope is that this result will contribute to getting rid of the IF once and for all.
I have another, more scientific point to raise. Indeed, one needs to remember that PLoS ONE is first and foremost an experiment in progress, and that in publishing papers, the main concern is always their scientific soundness. But as any trained scientist knows only too well, for any experiment to be meaningful, it needs controls. And the more relevant controls are not the old pay-for-access journals, but the many other open access journals that publishers such as BMC or Hindawi have produced over the past few years. The fees they charge are comparable to those of PLoS ONE, but their IFs are, on the whole, much lower. One much more interesting question to ask would thus be why there should be such a difference between PLoS ONE and the papers published by open access journals which are supposedly more selective, at least regarding the subject. And I personally think it has more to do with the type of scientists who published in PLoS ONE when it did not have an IF. It is not that they were wealthier, but that, on the whole, they cared more about their science than about the evaluation of their science by bureaucrats. And here I join Philip Davis in his conclusion that PLoS ONE’s IF is indeed more of a curse than a blessing, because people who care more about the IF than about the quality of the science they publish will soon start submitting their papers to PLoS ONE. But I am also confident that solutions will be found to deal with the flood of manuscripts that is probably going to start drifting towards PLoS ONE’s gates.
And, as scientists, we should also not forget one other important thing: for a scientific result to be meaningful, it has to be REPRODUCIBLE. Let us hope that the groundbreaking experiment conducted by PLoS ONE over the past three years will entice more publishers to follow the example of caring more about scientific soundness than about trendiness and immediate impact.

Etienne Joly, member of PLoS ONE academic board.

Maybe the impact factor PLoS ONE received just tells us that the journal managed to cut out a major flaw in the traditional peer review system. Many good quality papers are rejected with the editorial comment: not significant enough, not enough novelty, not enough “impact”. That is clearly a personal opinion of the editor or a reviewer, and instead of ensuring that only the most significant work gets published, it sometimes serves to prevent competitors in the field from publishing in “high impact” journals. Or maybe the reviewers and editors are just not able to predict the future correctly. We should keep in mind that high rejection rates do not automatically mean that only the best papers are selected. In any case, the policy of PLoS ONE helps us concentrate our work on science and not on how to find the best way to get into a “high impact journal”.

Every time I click over to the Scholarly Kitchen I get frustrated by the words the SK authors use to describe PLoS ONE: “light peer-review,” “we only check for technical accuracy.”

I am the plant biology editor for PLoS ONE and I have also reviewed for all the highest impact factor plant journals. There is basically no difference in the peer review I do for these journals — the completely arbitrary and subjective “significance call” being the only difference. In fact, I rarely say anything about significance in the review.

Not only do we check for technical accuracy, we also check that the conclusions that they draw from the experiments are supported, and that the papers meet standards for readability, ethics, availability etc. This means that the papers we publish are of good quality.

I think Barbara’s point above — that perhaps a lot of good research was going unpublished because of arbitrary editorial decisions — is spot on.


I take your point. Well put.
But neither you nor anyone at PLoS has addressed the substantive questions posed in my post, specifically:

1) How will PLoS ONE adjust to a predicted increase in submissions, particularly if the quality of these future submissions is below what the editors enjoyed pre-Impact Factor?

2) How will PLoS ONE adjust to a predicted decrease in the percentage of authors paying full article processing fees?

These are the important issues I attempted to raise in the post. If the phrase “light peer review” is a sensitive issue, I will refrain from using it in the future.

As Mr. Gunn said: Nothing. These two “problems” are hypothetical and irrelevant for PLOS and the general scientific community. Similarly, it would probably not be relevant to ask other publishers what they are going to do about the high IF of PLOS ONE, since it will drive paper submissions away from their journals and dry up their submissions, readership and revenues. I don’t think these are valid questions for other publishers either.

PLOS ONE has certain philosophy and model which proves workable. My guess is that they will keep doing what they have been doing. I really don’t think these issues are of concerns for them. As part of the scientific community, we just focus on advancing our research and the general success of our community.

Let me start by saying that I have no connection to the financial side. So I have no knowledge of how the business model is or isn’t doing and in this case am speaking strictly for myself.

As per #2: I would guess that they could stop automatically granting waivers. There are any number of ways to think about easily imposing criteria for when a waiver request needs to be reviewed. Of course the other alternative is that they could raise the per pub charge.

In regards to #1: I think part of the reply is that I don’t actually believe that there is a large reservoir of unacceptable papers out there that is going to rush in and swamp out the acceptable papers that are also going to come in now that people who care about impact factors (for themselves or for institutional reasons) see us as a viable avenue.

I am sure if we get more papers, we will get more academic editors. I am just highly unconvinced that the ratios will change enough to make a difference to the business model.

In regards to the phrase “light peer review”: I don’t know that the phrase is a sensitive issue as much as the underlying view from SK writers that there is something sub-standard about the peer review that we do. Part of the annoyance that generated my first comment was from the link you put back to a post discussing Declan Butler, who termed our work “low quality”. Elsewhere on SK, the term “primitive” was used.

I think if you spoke to most AEs at PLoS ONE, you would find that we don’t see much separation between the review work we do for PLoS ONE and the work we do for other journals. Which means, from my perspective, that you don’t think I do good, solid peer review. And of course, for me, that is a sensitive topic.

What I do believe is that our business model will allow us to adjust to the changing situation better than the for-profits, who also rely on a large volunteer force, but then turn around and charge that force for the right to read what they worked on. I think Nature has a lot more to worry about from a UC reviewer and submission strike than we have to worry about a change in the ratio of acceptable to unacceptable submissions.

Ivan, can you suggest a better way of expressing PLoS One’s editorial review policy? Clearly they do take a lighter hand than other journals, leaving out the subjective judgments you mention. But I do understand why the word “light” might be bothersome, as one might assume it is meant as pejorative.

I know I’m not alone as an author on this blog in that when I mention PLoS in any posting, no matter the context, I choose my words very carefully. There’s something of an immediate, knee jerk reaction that happens from some readers, an assumption that any article that isn’t effusive praise is an unfair attack. PLoS is doing some of the boldest and most interesting experimentation in the world of academic publishing, and their efforts deserve serious analysis and critical review. If poorly chosen vocabulary distracts from the actual points being discussed in that analysis, then we should do our best to help our readers focus on the actual message of the posts.

David: I do realize that it’s a little hard to describe. When I give the pitch to prospective authors/editors/reviewers, it usually takes a couple of sentences. I would call our peer review “scientific quality focused” or, somewhat of a mouthful, “peer-review without significance determination”.

Clearly, we have a difference of opinion about the value of the significance part of the review. You see it (I assume) as an important addition to the process, the loss of which detracts from the final result. I think in the majority of cases it’s actually a hindrance that wastes time, keeps good science from seeing the light of day and increases the bias against underrepresented groups. From my perspective it might be better to call our process “deconvoluted peer review”.

But I do appreciate that you are not necessarily trying to use pejorative terms.

I try to keep an open mind–I’m no longer at the bench, so my needs are likely different from yours. As a publisher (and this is, after all, a business blog written by publishing professionals and one librarian for other publishing professionals and librarians), the important thing is to serve the needs of our readers. And from all our market research, and the anecdotal evidence I’ve gathered from the thousands of scientists I work with, journal brand identity is seen as a valuable filtering mechanism. Seeing that a paper has been published in a “glamour” journal like Cell or PLoS Biology signifies something to the reader and helps them narrow down their potentially overwhelming reading lists. Post-publication review is still in an experimental phase and is unproven. Scientists’ willingness to spend time and effort leaving reviews on papers has, at least so far, failed to materialize. I also have a lot of issues with many of the metrics that PLoS has chosen to use, as some are completely meaningless and others seem even more open to gaming and rigging than the admittedly flawed Impact Factor. Implementation of better metrics and widespread adoption would certainly change my position, though.

I do think that having more filters is better than having fewer, given the quantity of information out there. I think the Highlander approach (“There can be only one!”) is the wrong way to go. So many of the new ideas in publishing are pitched as zero-sum games, X has to replace Y, rather than as helpful additions, X complements Y. Why can’t pre-publication and post-publication review both enhance the reader’s experience? Is there a real difference if a judgment of the significance of a paper is made before publication by peer reviewers rather than after publication? It’s the same judgment, after all. There’s also an efficiency to the pre-publication review system, which ensures that all papers have had this sort of inspection by (hopefully) qualified experts, whereas leaving it to the readers (judging by PLoS’ released numbers) means that most papers will receive no such scrutiny. If I’m using reviews as a filtering mechanism, then the time advantage is lost, as I’d likely have to wait a long time for someone to review any given paper, if it ever happens at all.

I believe in a healthy, varied ecosystem. There’s room for great success for many different business models, many different ways of presenting information. PLoS One serves a valuable role in that ecosystem, as it does offer the particular advantages you note.

I think, as a PLoS One AE, that the most succinct way of expressing the editorial criteria is that we are focused on experimental rigor and careful interpretation. By completely ignoring supposed “impact” (and asking our reviewers–the same people who would be reviewing them for any other journal–to do the same), maybe we do a better job than some journals of evaluating the actual science.

Declan Butler’s assertion that our strategy is to publish a high volume of low quality papers (I wonder how many PLoS One papers he read and, exercising his scientific expertise, deemed “low quality”) was based partly on not caring what our editorial criteria actually are–you don’t need facts to write a hatchet job–and partly on being immersed in the science publishing culture that, consciously or not, equates individual paper quality with journal impact factor, a demonstrably false position.

Dear All,
I am an author who recently published a paper in PLoS ONE. I must tell you that the professionalism shown to me by this journal is beyond imagination, and I salute them for that. I also saw a high-quality review system. Moreover, they are compassionate about the publication cost, and I am not the kind of author that was mentioned in the article. I just mentioned my condition, and they readily waived my publication fee. Let us all encourage a mission that will eventually be a competition to Science and Nature.

It’s terribly strange that many PLoS ONE supporters view my post negatively and thus feel compelled to defend the publisher in other ways. This post was about their new Impact Factor and how it may ultimately change the way they do business.

But speaking of your waived article publication fee, how do you think PLoS will adapt if they receive more waiver requests like yours?

See, identifying the right people for waivers is the way to go, and they must have criteria for that. If they can come up with innovations in that area, having already generated very high-quality open access journals, I am sure PLoS will devise intelligent strategies to overcome the obstacles and challenges posed to them. Let us all support the great idea that knowledge is nobody’s property, and share it with others!

This comment thread is taking an interesting and constructive turn. I suggest calling PLoS One peer review ‘objective’ peer review, as opposed to the standard ‘subjective’ peer review.

Also, I agree that pre- and post-publication peer review can easily coexist (and should, even in the subjective form). What researchers need is full-text access in a single place on their desk, so they need not waste time, effort and nerve on single publications as they do now. The current state is absolutely intolerable. As David Crotty points out, expert opinion is highly valued. This will still be the case when we all have full access to what we need to read, probably even more so. Is it so inconceivable to have all the different publishers, who now publish journals, keep doing what they’re doing, only after publication? If David Crotty is correct that so many people value this sort of review, why not have them pay for this sorting? Technically, it’s no problem to just not see anything that hasn’t been approved by this service, so what’s the problem? One big full-text, open-access, peer-reviewed, scholarly database, with as many modern filters and sorting technologies as we can throw at it. Technically, this is a piece of cake. So why don’t we have it already?

I don’t think “objective/subjective” fits the bill for a variety of reasons:
1) All reviews are going to be subjective, no matter what is actually being reviewed. In the case of PLoS One, if the author uses technique A and the paper gets sent to reviewer 1, who has always had good, accurate results with technique A, his review will differ from that of reviewer 2, who has never quite gotten technique A to work reliably in his lab. Most journals rely on having several peer reviewers for any given paper to cut down on the effects of this sort of subjectivity. PLoS One, because it asks the reviewer to make fewer judgments, may indeed be less subjective than other journals that want more opinions voiced, but “less subjective” does not equal “objective”.

2) More importantly, the point of looking for a better term is to find a neutral way to discuss the various types of journals. Replacing a term that one segment of the audience finds demeaning and insulting with a different term that insults and demeans the work of a different segment of the audience is no improvement. Instead of thoughtful analysis of the fascinating experiments being done by companies like PLoS, we instead end up bogged down in Mac vs. PC sorts of mindless flamewars. The idea is to get beyond the sniping.

What about a term like “streamlined review process”, which does connote that it has been stripped down, but that it’s been done mindfully to improve performance? Is “streamlined” pejorative? In terms of aerodynamics or performance, it would be seen as superlative.

As for your suggestions of doing away with journals in favor of one database, as a biologist, I tend to worry about the health of any ecosystem that’s so limited. A varied ecosystem with many different players filling many different niches is much healthier and much better able to deal with upheaval or change, rather than putting all of one’s eggs in one basket. A monopoly marketplace is generally bad for both producers of content and consumers of content. If you’re an author and your paper gets rejected from the one and only place for publication, you have no recourse, no way to get a second chance for something wrongly rejected. In a varied ecosystem, if you get an unfair review from PLoS Medicine, there are other journals where you can take your work. With no competition, the pressure to improve disappears. As an example, we all saw how Internet Explorer stagnated for years after dominating the market and eliminating competitors. It only started to see advancement when serious competition was offered by Firefox and Safari. With no competition, those behind the proposed database could also arbitrarily raise author costs to line their own pockets or to pay off their own vanity projects. Ceding control of the world’s knowledge from a variety of tyrants to just one tyrant does not strike me as a good thing.

The idea assumes that “one size fits all”, that one archive could serve the needs of all researchers. It would have to be a wildly varied archive to cover molecular biology, comparative literature, art history, political science, quantum mechanics and any other subjects where one publishes research papers. Which would lead to an enormous and complicated infrastructure both for management and for content. Which in my experience leads to bureaucracy and massive inefficiencies. There’s also all the evidence to date that shows that the research community is unwilling to participate in the sorts of social rating and filtering activities this database would need. That may change over time, but there are huge hurdles that would need to be overcome, including fear of repercussion and the desire to spend time more effectively advancing one’s own career.

Personally, I’m a believer in the free market, in letting the community decide what best serves its needs. Let a variety of publishers present a variety of offerings, and let authors and readers choose what works best. If the PLoS One model is vastly superior, then it will come to dominate because of its superiority. But if someone else like a big corporate conglomerate publisher can offer the audience a better service at a better price, then they deserve to dominate the market. Each way of doing business should compete fairly on its own merits. By all means, start the sort of database you propose and let’s see what happens. My suspicion is that there are a variety of needs out there and no one solution is going to satisfy everyone.

Finding ‘neutral’ terms will be tricky, as the two are not equivalent. I’m fine with ‘subjective’ and ‘less subjective’, which seems accurate to me, but raises the question: what is (and is there) ‘objective’ peer review? A new can of worms there…

Good point also on the monopoly problem. Two counter-arguments: 1. Nobody makes that argument about drinking water, not even in the US (I hope!) – and that’s very comparable, on several levels. The current system is like preventing the poor from access to drinking water and all I’m asking for is equal access for all at a decent price.
2. It’s a common misconception that one database means one single place to handle everything. Of course it’s a distributed system, and the same 1-2 million papers would still need to get published, which cannot be done with just a dozen people. Of course, if you get wrongly rejected, there would still be just as many other people in the system you could go to for recourse or appeal as now.
The only way a single database would lead to the effect you described would be if the current system is so vastly inefficient that the same number of papers would be publishable with just a handful of people running the database. Given where this discussion is taking place, I’m rather confident that this is not what you were trying to say?

Finally, I don’t think any scientist really gives a rodent’s behind as to how it all works, as long as they get access to the literature and can publish their papers in a reasonable way. Monopoly, corporate standard, non-profit, whatever! The only reason we’re having this discussion is because the system right now is so pathetically screwed up that people who usually don’t even have enough time to see their own kids get engaged in reforming scholarly publishing at the expense of normally more important things. And I mean that literally. So how you guys figure out how to solve the problem, market or not, isn’t really my concern. What is my concern is that the people who make 4 billion every year in taxpayer money have been asleep at the wheel while they’ve been high on the dough and now our drinking supply is FUBAR. Which means I’m only going to trust this section of the free market, if they show me they’re at least trying to actually do something for the billions they’re making. Their recent track-record does not bolster my confidence, though.

To sum it up: full-text access with modern filter/sorting technology together with reasonable effort for publishing at reasonable cost. That’s the service that science requires and instead of providing this service the current free market is trying hard to oppose it.

“Subjective” and “less-subjective” is still judgmental, stating that one is superior to the other. I’m looking for something without a value judgment (which ironically enough, fits in exactly with the philosophy behind PLoS One’s peer review system). Those terms work for you because you’ve chosen a side in the debate. I want to be able to talk about PLoS One without immediately being accused of hating it, or “jealous” of it (whatever that means) before I’ve even had a chance to make a substantive point.

As for drinking water, there is clearly room on the market for all sorts of different types of beverages. The poor certainly don’t have access to Evian, nor to Dom Perignon for that matter. There are people who prefer the latter, and each provides different things that are important to its consumers. And for what it’s worth, I certainly have to pay my local water district for the water I use. There may be low-cost or free options available for those in need (something like Hinari), but homeowners and other property owners pay a regular water bill in most municipalities. What constitutes a “decent price” is certainly open for debate. When I lived in drought-stricken areas, my water bills were a lot higher than when I lived in wetter climes.

The proposal called for “One big full-text, open-access, peer-reviewed, scholarly database”, which to me implies just that, one database. You could spread management of that database out in a general sprawl, but again, there are inefficiencies created by not having any sort of centralized oversight. People complain about Apple’s dictatorial control of their app market, but it certainly works a lot better than Google’s more open and disorganized Android marketplace. And speaking of inefficiencies, if every paper that was rejected could then just immediately be resubmitted to someone else, then why bother with peer review at all? If a rejection isn’t likely to stand, why reject anything? Won’t you have a glut of re-review after re-review after re-review of the same rejected papers bogging things down? If the system is meant to be completely distributed and organized around a wide variety of silos with independent editorial boards, then how is that any different from the current system of academic publishing?

And I agree that most scientists don’t really care how the publication world works, or what business model is used, as long as their needs are served. That’s why, despite the very vocal nature of the online advocates for many causes, they remain a relatively small minority. Uptake of open access options remains fairly low in many fields and most researchers aren’t participating in things like social filtering activities. Your needs are clearly not being met, hence your activism and participation in helping explore new directions. But your needs may not be reflective of the needs of every other researcher out there. As a neurobiologist, you have access to funds that can pay author charges–the same might not be true for a historian. That’s why I don’t think one bucket is an approach that will serve well. But that’s the joy of the free market–if you don’t serve your customers’ needs, then you go out of business, regardless of any attempts you might make to the contrary. This market correction may not be happening as quickly as you’d like (remember, you’ve got a century plus of inertia to overcome), or the market just may not share your priorities.

And I do see a lot of experimentation happening in the market, even from the big corporations. Nature bought into the Web 2.0 concept in a big way, has added commenting on its articles and has started a more free-form journal with some similarities to the new modalities proposed by PLoS. Springer is (according to a recent presentation) the world’s largest open access publisher. Elsevier is trying to reformulate the scientific article. They may not be particularly good experiments (wait, am I allowed to judge significance in my review?), but there is a lot of effort and a lot of money flying around in new directions.

Good and thoughtful points. Please let me address four of them.

I probably should’ve used ‘tap water’ instead of ‘drinking water’. In my native German, it’s less ambiguous, but I remembered the difference when you answered 🙂 No country I know of is considering letting the free market compete for tap water – it is simply too important. For a state/municipal monopoly, that seems to work quite well, so there is no reason why it shouldn’t be *possible* to get it to work well for scholarly publishing as well. Now, the analogy to mineral waters is interesting. Clearly, Evian doesn’t want everybody to drink their water, hence the price tag. I wonder if authors, when faced with a choice, would want to restrict access to their research to the rich elite. Ironically, the current system provides all the incentives for authors to limit their readership, as being published is more important than being read. Clearly, nobody would want to keep this incentive in place.

‘One database’: again, I should’ve written ‘one *decentralized* database’, my apologies. One of the models I’m currently thinking of is to have every library publish the papers of their institution, according to one or several field-specific standard(s). Not too much of the overall general structure would change: you could still tag papers with journal names if you wanted to, or you could have rating services (maybe called Nature or Science) compete for who picks the most important and significant work for people who like such traditional filtering services, etc. Plenty of room for the free market and competition in the premium sector, and if people value all these services, nobody would need to lose their jobs, as the papers would still need to get published as before.

As to my needs not reflecting the needs of other colleagues: in my personal experience (granted, just an anecdote), people don’t know what the current technical options are, for reasons too many to list here. The system is so entrenched, that people feel liberated, when you tell them what’s wrong and that it doesn’t have to be this way. I draw this experience from the reactions to my talk on scholarly publishing. People are so enthusiastic after these talks and almost every single time someone comes up and invites me to give this talk at their institution/group as well. From this and other interactions with researchers inside and outside my field, I would tentatively conclude that the success of PLoS One is precisely because other researchers share my needs. Probably not all of them 🙂 but a great number.

Finally, as to the current experiments in the corporate publishing environment: you are correct that there are significant innovations going on right now, spearheaded conceptually by forward-looking NPG, who seem to have been the first to realize the potential of their business model becoming extinct. However, comparing the rate of innovation there with the innovations we’ve seen from Amazon, Ebay, Google, YouTube, Facebook etc. is laughable; it is more copy-cat than anything truly novel. Moreover, the innovations have been lagging the general market by about 10-15 years – and that in a sector of society that thinks of itself as ‘cutting edge’. When I explain this to non-scientists, it is embarrassing to admit that they enjoy more modern media technology in their spare time than we scientists do in our workplace. In fact, even our students find it hard to believe we still have to use general social networks such as Friendfeed for work!

P.S.: Like all complex systems, market economies can also end up in local minima, where outside intervention is required to get them out of there. Given the largely self-stabilizing scholarly publishing system I’m not sure if one can rule this possibility out just yet.

Bjoern, our commenting system tends to get a little thin and hard to read after a few layers, so starting a new thread here….

Tap water is an interesting analogy–it gets across the idea of a “public good”. At the same time though, it shows the need for a detailed infrastructure, and centralized oversight. Getting water to citizens is not a simple process, and is not one that could easily be crowdsourced. I know that when I’m thirsty, I don’t want to have to go online to see where my colleagues are finding potable water today. I want it to just work, and to work efficiently and reliably.

Evian also makes an interesting analogy–I disagree that they don’t want everyone drinking their water. The ideal situation is that everyone would indeed buy their product. But what they are doing is finding the right balance to maximize their profits. For them, selling a smaller amount at a higher margin is a better balance than selling more water for less money per bottle (see Apple versus Dell for a similar comparison). That same balance is something that publishers must constantly seek. PLoS One is a really interesting case because they do have the “public good” philosophy at heart, yet at the same time, they absolutely need to turn a profit. That profit is necessary for funding PLoS’ less financially successful journals and all the other experiments they’re conducting. What determines a “fair” price for them must be an interesting internal debate, do they maximize profits to drive bold experimentation or do they just ask for the bare minimum to stay afloat?

As far as the database, what you’re proposing sounds more and more like arXiv and less and less like PLoS One. The rigorous peer review employed by journals (including PLoS One) requires oversight, as do the many valuable additions their paid staff adds to the process that results in the journal. Asking librarians to run a peer-review process doesn’t strike me as a great idea. How many librarians have the scientific knowledge or the connections with the community outside of their own institutions to do a thorough job here? Are they also going to become copyeditors, reference checkers, etc? What happens when a paper is published by collaborators at different institutions? Who publishes it? Does this result in multiple versions of the same paper clogging the system? If you just have the libraries make the material available and then have peer review done by the “independent body” you mention in your talk slides, then you’re back in the monopoly problems and you’re likely getting rid of any advantages of having a distributed network–it’s all going to have to funnel through one place in the end.

But if you eliminate the peer review and think instead in terms of a preprint system like arXiv, that might resolve a lot of the issues and still provide many of the benefits you’re seeking to gain. It should be noted, though, that arXiv itself is having financial difficulties and that these sorts of archives are complex and need to find ways to sustain themselves.

On the next couple of points, it’s important to look at the numbers for perspective. It’s not surprising to me that a self-selected group of individuals willing to take time out of their day to hear a talk titled “What’s Wrong With Scholarly Publishing” would react positively to the message presented. If you take the number of people at the talk who reacted positively, and then divide that by the total number of researchers at the institution where you gave the talk, that might provide some sense of the percentages. I’m willing to bet there were a lot more people who chose instead to stay at their bench doing their research that day.

And the numbers are also likely to provide your answer for why you’re not seeing Google-like levels of innovation from scholarly publishers. In your talk you note Elsevier as one of (if not the) most profitable scholarly publishing houses with a recent annual profit of $693 million. Compare that number to Google’s profits, which last year were nearly $15 billion. Google can afford to do a lot more experimenting than can Elsevier, and both are in different universes than most scholarly publishing houses. For most of us, publishing is a low-margin business. We can experiment, but we have to be careful and choose our experiments wisely. Think of Google’s success ratio for what they’ve put out–they’re still basically reliant on one thing, search advertising, nearly everything else has been a failure. For a small society publisher, that level of investment in failure is not tenable. I work for a not-for-profit institution. Any profits we turn are put directly into funding research. For us, experimentation requires great justification–every penny we lose building a failed social bookmarking site is a penny that could have gone into cancer research or toward a better understanding of schizophrenia. One can not afford to stagnate, but at the same time, most of us don’t have $15 billion in profits to throw against the wall to see what sticks. Comparing the publishing wing of The RNA Society to one of Fortune’s top 5 companies is kind of unfair.

I would argue that scholarly publishing is indeed at the forefront of experimentation in the publishing field. Journals moved online remarkably early and remarkably successfully compared with other publishing ventures like newspapers, magazines and books. Book publishing is just entering the digital era. Meanwhile, journal publishers are experimenting with new business models, new access models, new social ventures, etc. How much of this do you see from the Wall Street Journal or People Magazine? Wired can’t even figure out that they should be selling their electronic magazines via discounted subscriptions rather than per high priced issue!

“In fact, even our students find it hard to believe we still have to use general social networks such as Friendfeed for work!”

And this statement is an important one. There have been hundreds of attempts at specialized social networks for scientists. None have caught on. The problem may not be a lack of experimentation, but instead a lack of interest from the community or a lack of any useful functionality that can be provided through specialization. Perhaps “Facebook for Scientists” already exists, and it’s called “Facebook”. Either that or “Facebook for Scientists” doesn’t exist because scientists don’t really need/want Facebook.

Let’s see how far that water analogy goes (all analogies fail sooner or later). The centralized infrastructure required is the set of common standards (think HTTP, or IP, SMTP etc.) on which you can build whatever infrastructure you’d like. For instance, one librarian in Regensburg, Bavaria came up with a nifty piece of software to handle all the digital subscriptions and make them searchable for everyone. Now all the libraries in Germany apparently use this nice tool, which relies on underlying common standards that libraries use. These standards are like the pipe sizes and pressures of the water system. As long as your pipes fit and your faucet can regulate the pressure, it may look any way you want. Likewise, you can create any sort of filter and sorting algorithm to have only the papers important to you come out. Just as with a faucet (or shower head), you decide how nicely and efficiently your scientific information is delivered, by choosing from different options.

Evian, of course, does not want everybody to *drink* their water. They want everybody to *buy* their water. Big difference.

Obviously, I oppose PLoS One keeping the other community journals alive. PLoS does have enough brand reputation now and I will argue strongly to phase out the other journals in the foreseeable future (not eventually!).

The database I dream of right now runs roughly similar to PLoS One or Frontiers in Neuroscience (for which I also donate time). So of course it’s peer-reviewed as the least common denominator and starting point. The important thing is that the user profiles in this database will accumulate ‘karma’ in terms of articles published, citations, downloads, editor and review work, teaching, what have you. Whatever you can think of that might be worth having in a scientist (and can be counted!) could find its way in there. This means every user of the database can shape their own profile by generating their own unique combination of contributions: data, ideas, infrastructure, etc., each to their own talents. That handily solves the much-discussed current attribution and incentive crisis as well. This database, of course, goes beyond arXiv or PLoS, FiN, etc. Another nice thing is: it only requires a few computers in all libraries (i.e. distributed infrastructure), the technology is there and is lightweight (i.e. no supercomputers required), and the entire system is flexible and open, such that anyone with a great idea can contribute to the system and improve it (and raise their ‘karma’). In this database, of course, it is really irrelevant which of the libraries publishes which papers, as long as on average every library contributes its share, which is easily determined by authorship. I mean, it’s obviously not like the paper resides on one single hard drive like a book on a shelf! 🙂 Let me also pre-empt any gaming argument: of course, any system can be gamed, but this one is at least vastly more difficult to game than the current one. Moreover, you could make the rules according to which you earn ‘karma’ flexible and dynamic, meaning that it is impossible to game (as long as it’s more dynamic than the population using it).
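As a purely hypothetical sketch of what such a ‘karma’ profile could look like (the contribution types, weights and numbers below are my own illustrative assumptions, not anything PLoS or Frontiers actually implements), the accounting might be as simple as:

```python
from dataclasses import dataclass, field

# Hypothetical contribution weights. In the proposal above these would be
# flexible and adjustable over time, not fixed constants.
DEFAULT_WEIGHTS = {
    "papers": 10.0,
    "citations": 1.0,
    "downloads": 0.01,
    "reviews": 5.0,
    "editing": 5.0,
    "teaching": 3.0,
}

@dataclass
class Profile:
    """A researcher's profile accumulating countable contributions."""
    name: str
    counts: dict = field(default_factory=lambda: {k: 0 for k in DEFAULT_WEIGHTS})

    def record(self, kind: str, n: int = 1) -> None:
        """Log n new contributions of the given kind."""
        self.counts[kind] = self.counts.get(kind, 0) + n

    def karma(self, weights: dict = None) -> float:
        """Weighted sum of contributions; passing in new weights models
        the 'dynamic rules' meant to discourage gaming."""
        w = weights or DEFAULT_WEIGHTS
        return sum(w.get(k, 0.0) * v for k, v in self.counts.items())

# Example: one paper, twelve citations, three reviews
p = Profile("A. Researcher")
p.record("papers")
p.record("citations", 12)
p.record("reviews", 3)
print(p.karma())  # 10 + 12 + 15 = 37.0
```

Making the weights an argument rather than a baked-in constant is what models the ‘dynamic rules’ idea: the scoring can be changed faster than anyone can optimize against it.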

And I think we don’t need to talk about financial problems, given the current library situation. 🙂 The 4 billion in yearly profits of the big three publishers would easily buy the computers and personnel (copyeditors, reference checkers, etc) required and then some.

I do see your point about innovation being money-driven, and it is well taken. What I was trying to say is that all the mentioned companies were small in the beginning and grew because they had innovative ideas which transformed society within a decade. I didn’t see any innovative ideas and experiments in scholarly publishing until very recently, and at least those I am aware of seem to try and copy functionality that is about a decade old, as if they hope that users will recognize it (just speculating here, of course). I’m not surprised to hear that scholarly publishing is the most innovative branch of publishing (I wouldn’t know myself!), which also means not all hope is lost 🙂 And of course I also see why many publishers see modern web-based technology more as a threat than an opportunity, and why they struggle hard to stay afloat, at least the smaller ones (Elsevier again grew in double digits last year!).

FB4scientists: This is a much-debated topic and you could probably fill bookshelves with the online comments 🙂 Last I looked there were about 60 such sites (please ask if you need the reference, I can’t find it now). The problem with all of these is that they don’t fit into the working scientist’s workflow, which by and large still looks like it did in the 90s: use the internet to check email and browse/search for literature. I’ll not go into the person-centric vs. object-centric dichotomy of these approaches, but the social technology needs to evolve around the publishing system, not separately from it. Scientists usually don’t take yet more time out to try something they see little value in. Reading and writing scientific literature is the daily bread of any researcher, and thus any added value needs to revolve around that – not around their buddies. An example: what do two scientists do when they meet at a conference? They ask each other about interesting posters/presentations. So this sort of suggestion mechanism needs to be incorporated into meetings and into our literature searches. Then people will use it, because it extends what they’re already doing, just like the abovementioned companies extended what people were already doing. I could go on with a long list of further examples, but I hope this one will suffice for now.

In summary: I can see that the technology is out there right now to cut out between 5 and 10 hours per week of boring, tedious, manual labor and at the same time provide significant added value on top of that. The only thing keeping us from it is historical baggage. That’s frustrating, and every time I have to go over a journal’s table of contents only to find no relevant article for me, or maybe one, I get more frustrated – and that’s almost every day. Every time I have to reformat a paper for no good reason upon resubmission I get frustrated. This doesn’t occur every day, but it’s still bad enough 🙂

It’s still unclear to me how you can have a system that’s both distributed and centralized. If you let anyone anywhere do the pre-publication peer review, then it lacks coordination and becomes inconsistent, easily gamed and inefficient (any rejections would immediately be resubmitted to another peer reviewer, clogging the system with hordes of un-publishable papers). If you instead rely on some central body, it provides consistency and efficiency, but you then run into all the monopoly problems detailed above. Who is that central authority? Who calls the shots? Who gets to rule science?

You also risk shutting out researchers at companies that lack librarians, or at small institutions that lack that level of support. Is that $4bn figure really accurate? Most publishers have revenue streams outside of library subscriptions (advertising, as one example), products that are not science journals (books, as one example) and paying subscribers who are not academic libraries.

I have great reservations about a karma system. I understand why it’s necessary–one of the big problems with current social networks for scientists is that there’s no penalty for freeloading. You can use them to gather information without offering any yourself which, given the time pressures on most scientists, makes more sense than contributing. The problems are many though, the first being that to be meaningful, any system would need to be approved and adopted by every funding agency on earth as well as all hiring and tenure authorities. Otherwise it’s just a meaningless number, a game used to stroke one’s own ego.

Further, such systems are vastly easier to game than something like the Impact Factor. To game the Impact Factor, there’s a significant hurdle one must overcome, as you’d have to publish multiple papers that cite one another in reputable journals. That’s a lot tougher to accomplish than creating multiple accounts in a karma system and giving yourself high ratings, or getting all your friends to do the same. As you note, one could constantly adjust the rules to overcome any cheating, but that asks the same question as above–who is in charge of the rules, who makes those changes? What if I’ve devoted my entire career to doing activity X and now your central authority decides that’s no longer part of my karma score? Is it fair to ask researchers to hit a constantly moving target?

Worst of all, the system you’re describing is based on quantity of work, not quality. If the amount of comments you make on the papers of others is a determining factor (as one example), that’s not a helpful measure for a funding agency. If my charity’s goal is to cure cancer, I want someone who produces valuable scientific results, not someone who spends a lot of time teaching or doing editorial or review work. A measure of quality is not the same thing as a measure of effort or of how nicely one plays with one’s community.

I could continue to pick holes like this for months and months–the point being that this is an extremely complicated undertaking. If you’re taking the position that the current system is broken and needs to be replaced, your replacement has to be less broken than what people already use. Creating anything to fulfill that vision is likely an iterative process, one that is going to take a long period of trial and error to correct these sorts of flaws and to find ways to provide what’s already offered elsewhere. It’s not something that can spring up fully-realized. It’s also unlikely that the research community can be forced into switching to a new system. It’s the sort of thing that would likely start small, and grow and improve over many years. As it does so, it would prove its usefulness to the community and be adopted organically, rather than by coercion. That may not be satisfactory, given your desire to see rapid innovation and change, but I’m not sure how else one would assure buy-in and trust from a thoughtful and skeptical community.

I would like to disabuse you of one misconception though, the idea that publishers fear technology. In my experience, publishers have an irrational love affair with any new technology that comes down the pike. Go to a publishing meeting today and you’ll see more iPads than in a mall full of Apple Stores. A few years back everyone was carrying Kindles. Every publisher out there is working on a smartphone app, or giving social networking a shot (note how many of those annoying FB-for-scientists sites came from/are still coming from publishers?), incorporating Drupal into their sites for reader customization, etc., etc. There’s a strong desire to jump on new bandwagons, find new ways to reach customers and create new revenue streams. The reason you may not see much of this from the outside is that much of it is premature, poorly done (see most journal iPhone apps), too expensive to implement to make it worthwhile (ask the medical journal that spent lots of money creating a Kindle version of their journal only to see less than 0.2% of their readership look at it), or just not relevant enough to scholarly publishing to make it work.

Also, I question the idea that social technologies can save you that 5-10 hours a week. How much time do you spend per week on FriendFeed, blogs, Twitter? If a scientist is required to use these tools, how many hours does that add to their workload? Given that every lab is different, and the interests of each person in each lab differ, is it likely that all of your information-seeking needs can be covered by individuals with different interests and priorities? Could you find every paper you want to read by freeloading, and cut out those 5-10 hours? If freeloading is not allowed on your network, then aren’t you going to have to do that same 5-10 hours of scouring the literature (in whatever form) to come up with your contributions? Someone has to read all those terrible papers in order to write bad reviews and warn others away, right? That means you. How many terrible papers are you willing to read every day? Wouldn’t a network like this just add extra work into a scientist’s day? Doesn’t that take away more and more hours from being at the actual bench doing actual research? Isn’t that what publishers are paid to do, the work that scientists don’t want to do so they can be freed to pursue their research? I’ve preached it repeatedly elsewhere, new tools should create new efficiencies, not new timesinks. Asking researchers to take on the work of publishers means piling more on top of their already ridiculous workload.

This is getting very constructive – nice!

The distributed/centralized system comes via federated standards. Granted, there are some technical problems with scale. For instance, it is not trivial to get all PLoS One AEs to adhere to the guidelines rather than judging subjectively. But this is a matter of training. Notwithstanding, some variability in interpreting these guidelines is probably not a bad idea, which is precisely why AEs sign off on each paper with their name. So you have federated standards (for instance for peer-review, but also in terms of format, citations, etc.) which have some wiggle-room (as always) and which could be adjusted if necessary for different fields. It’s really a lot like internet protocols, sans the wiggle-room 🙂 You have a few protocols running email (IMAP, POP3, SMTP), but it doesn’t matter what client you use, email is email.
Did that clear it up? There would be some minimal standards defining what a paper is, what a data contribution is, how comments are handled, etc. There would be a standards body debating changes to these standards as in any other industry. It’s really very similar to what other people are doing in plenty of other places.

Libraries need not be the only places to publish. Any company could still publish papers, as long as they adopt the standards. I only mentioned libraries, because that’s the obvious choice for universities. It’s like email: nobody will force you to use MS Outlook, but you’ll have to use the standard protocols if you want anybody to receive your email 🙂

~$4bn is just the annual adjusted profit of the top three publishers. I use it as a ballpark figure for the order of magnitude which could theoretically be saved. I’m aware it’s crude at best and inaccurate at worst, but I don’t think it’s easy to get a more accurate figure.

The nice thing with a standardized database is that you can calculate your own ‘karma’: you have access to all the data. For some people citations are more important; others look for a person who is good at infrastructure or reviewing or teaching. The karma you have depends heavily on how it is calculated, and with federated standards every new karma calculation is just a few lines of code away (see, there’s another opportunity for private companies to make money :-). This is good for your mental health, too: just do what you’re good at and what the position you want requires.
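As a toy illustration of ‘a few lines of code away’: with open access to the raw counts, each evaluator can plug in their own weighting. The categories and weights below are invented for the example, not taken from any real system:

```python
def karma(contributions, weights):
    """Score a profile with evaluator-chosen weights over openly shared counts."""
    return sum(weights.get(kind, 0) * n for kind, n in contributions.items())

counts = {"citations": 40, "reviews": 12, "datasets": 3, "teaching_hours": 50}

# A funding agency that cares mostly about citations and data sharing:
print(karma(counts, {"citations": 1.0, "datasets": 0.5}))        # → 41.5

# A department hiring a teacher weighs the same profile differently:
print(karma(counts, {"teaching_hours": 1.0, "reviews": 0.25}))   # → 53.0
```

The same profile yields different scores depending on what the evaluator needs – which is the ‘just do what you’re good at’ point above.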

I think it’s clear that this cannot be gamed. And even if some particular, very popular karma calculations were to be gameable, it would still be far more difficult than gaming the IF system. Of course, only publishers ‘game’ the IF itself (see the PLoS Medicine and Current Biology stories); researchers game the system, not the IFs. I personally know two colleagues who very successfully do this. One takes GlamMag editors out to fancy dinners and explains to them why they should publish their paper. The other regularly invites a whole group of editorial staff to their institute, shows them around, explains everything to them in great detail, rounds it off with a great dinner and flies them back home. This person does this without any paper in the pipeline, just in case. It works very well for these two people. Old fashioned, I know, but it works. As gaming systems go, it doesn’t get much simpler than the IF system.

I concur that I do not see a good transition to the system I’m proposing, hence my helpless lamenting about the historical baggage keeping us from implementing it. I also concur that it will of course be an iterative process, but already the first implementation would be vastly superior to what we have (though by no means ‘good’ – only less bad 🙂). I’m the first to admit that I don’t know how a transition would work. In my talks, I present the current situation and ask the audience whether, if they were King-for-a-day and could establish a publishing system from scratch, they would rebuild the current system.
In my wildest dreams I sometimes see people getting so fed up with the price hikes that they not only threaten boycott… But I’m a realist and don’t believe in revolutions before I participate in them 🙂

Maybe publishers feeling threatened by technology is a misconception! Maybe it takes people who are not in publishing to come up with the innovative idea, to create the revenue streams in ways nobody with experience would think of?

The technology which would save me this much time is information technology – social technology would just be one component of it. I take the time estimate from the amount of time I spend just wading through pages upon pages of stored searches, eToCs, mailing lists, paper alerts, press releases, etc. Using the information technology available, I could train my filter/sorter algorithm to deliver one list of highly relevant publications each week which I could look through in about 5-10 hours less than what I spend per week now. The social components of this filter are of course not required – but they would be incentivized, much as people today put the list of journals they review for on their resume. Another example: every researcher I know bookmarks the papers they read, intend to read or intend to cite. These lists of papers contribute another social component: it’s like a flag on each paper. It’s the combination of keywords, tags, reviews, ratings and bookmarks that would make this so powerful. Add some trainability to the algorithm (think of the suggestion engine of Amazon or CiteULike) and you have something that can be tweaked to perfection – or at least really, really close. And again you can have companies providing services in this direction; all that is required are federated standards, such as any business with some feeling of self-worth already has 🙂
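A naive sketch of such a filter/sorter, assuming hypothetical paper metadata (keyword lists, tags, bookmark counts) and hand-picked weights; a real system would of course learn its weights from user feedback, as the Amazon/CiteULike comparison suggests:

```python
def relevance(paper, interests, w_kw=1.0, w_tag=0.5, w_bm=0.1):
    """Score a paper by keyword/tag overlap with my interests plus a bookmark signal."""
    kw_hits = len(interests & set(paper["keywords"]))
    tag_hits = len(interests & set(paper["tags"]))
    return w_kw * kw_hits + w_tag * tag_hits + w_bm * paper["bookmarks"]

papers = [
    {"title": "A", "keywords": ["olfaction", "drosophila"], "tags": ["behavior"], "bookmarks": 40},
    {"title": "B", "keywords": ["astrology"], "tags": ["history"], "bookmarks": 2},
]
interests = {"drosophila", "behavior", "learning"}

# The weekly list: most relevant papers first
ranked = sorted(papers, key=lambda p: relevance(p, interests), reverse=True)
print([p["title"] for p in ranked])  # → ['A', 'B']
```

Even this crude combination of signals already ranks the on-topic, much-bookmarked paper above the irrelevant one; the bookmarks term is the ‘flag on each paper’ mentioned above.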

I hope this has helped to explain how the system would integrate into the current workflow and mean less work for everybody involved. The current amount of reviewing, bookmarking, journal clubbing, debating, etc., would probably be a more than sufficient resource to run the filter/sorter I’m thinking of – and the resource is growing by 3% every year. Coincidentally, what you publish being again more important than where you publish would create an incentive to publish less, so the growth in people would potentially outpace the growth in publications.
Finally (and I’m repeating myself here), the system I’m thinking of doesn’t preclude paying people to read and review articles. As I wrote earlier, yet another social component would be filter component ‘Science’ competing against filter component ‘Nature’ for customers paying for their filter service. Only after and not before publication.

To sum it up, there’s very little of what currently can be seen as desirable, which would be lost, but one would easily get rid of all that currently frustrates researchers. That’s why I claim that my utopia does exactly what you require: it integrates into the daily routines of scientists and saves them time. It’s a win-win and only historical baggage is keeping us from it. 🙂

You’re just proposing reinventing the wheel. Now, we have a journals ecosystem that allows for poster sessions, pre-publication review through private/personal networks before submission, pre-publication peer-review by volunteer reviewers, and filters with commonly understood reputations — and then the outputs all end up in searchable databases like Google and PubMed and GenBank, etc.

By reinventing the system with only minor modifications at the bauble level (renaming Impact Factor as “karma”), you’re not addressing the deeper problems — publish-or-perish culture, measurement systems that have become lazy substitutes for real thinking, and the search for reputation and career outstripping the search for truth and improvements.

Instead of researchers spending less time thinking about papers, findings, and reputation systems, I’d like them to spend more time. The more automated, systematic, and automatic we make it, the less thoughtful it will become. And thereby, the less useful it will be.

You still don’t get it. I guess it’s my bad explanations.

Here’s how the two systems compare from my perspective:

Your system:
I currently have just under 1000 paper titles accumulated over the last two weeks or so (eToCs, press-releases, eAlerts, etc.) which I need to browse through to find the few (usually less than 20) papers in there I would consider reading the abstract of. Once I have found the one or two I want to read more than the abstract of, I might have to spend another few hours trying to find a colleague who actually has access to it. Only to finally not be able to read the paper after all, because it’s all taken so long that I need to run to give my lecture.

My system:
I get a list of 20-30 relevant papers every Monday, of which I will read at least the abstract, and for the most relevant ones (those are the ones already at the top of the list) I’ll read the text and will even have time to thoroughly think about their implications for my work or send it out to my collaborators before I head out to the lecture in the afternoon.

Your system:
My most exciting research findings to date have been rejected because they were not of “sufficient general interest”. After about a week getting over the rejection, I spend the next several days reading ‘guidelines to authors’, comparing IFs and debating with the other co-authors where to send the paper next. Once we have reached an agreement, I re-write the whole story (mostly stupid copy and paste) and re-format all the figures to match the requirements of the other journal. No real change in content, obviously. Oh, and yes, of course I have to double-check that the reference manager changed all the references appropriately and that the reference list isn’t too long or too short. This time, I even have to print the manuscript and send a CD with the paper, in addition to filling out the three pages of online forms yet again. These procedures repeat for about two years until the paper is finally published – by splitting it in two and writing a review to combine the narrative which had to be split to make the components publishable. Very little extra data has been added in the process and none of it changed the outcome or strengthened the argument.

My system:
Submit the manuscript. Get reviewer feedback. Think about the reviewer comments. Discuss them with the co-authors. Do the requested experiments and enjoy that the suggested technique was exactly what you had been trying to do but didn’t know how. Resubmit a much improved version and get published.

Your system:
I check Google Scholar for some important method. The list of papers is not sorted by date and contains way too many titles. So I go to WoK to see if that works better, double-checking citations and/or other relevant papers, then going back to Google Scholar. Then I go to Scopus and do the same. I check PubMed to see if it confirms my search. I get frustrated that after using four different search engines, I still have to try to find the author’s website to see if they have deposited the paper there. After not finding it there, I end up asking for it in the FriendFeed “References Wanted” room. When I finally have the paper, I find that it didn’t describe the method I was looking for, and I start my search over.

My system:
I search the full-text database for my keywords in the methods section to get a list of interesting papers. If necessary, I modify my sort and filter criteria to narrow down the search and reduce the number of titles. For the most interesting ones I check the respective methods sections to see whether they describe the method I was looking for. I then send the paper with the exact method to my graduate student for later discussion.

Your system:
I have set up alerts in Google Scholar, Scopus and WoK for papers citing my papers. After I have made sure that any particular alert hasn’t already been covered by one of the other alerts, I copy and paste each citation by hand, format it, and upload the updated list to my website. Don’t ask how I had to do it before GS had alerts!

My system:
I link to “my citations” from my site.

P.S.: “Your system” examples all actually happened.

Your system depends on magic. You’re complaining about the normal hard work that intellectual engagement requires. As for your publishing experience, you seem to have learned why your papers don’t get accepted initially. Good for you. That’s another payoff of hard work. But your continual use of magical thinking isn’t impressing me. Why don’t you go and build your dream system instead of just badgering everyone about it?

There’s nothing like an example to explain an idea.

You make a good point Kent–there’s some work that can’t be avoided.

Scientists as a group are skeptical by nature and usually fairly egotistical (most think they are smart and have the right approach to answering their research problems). As such, I wonder if a distributed social network would really save much time. I’m betting most would still spend the same amount of time scouring the literature, assuming that there must be some gems out there that everybody else missed. Each researcher is also so specialized that there’s only so much one can draw from the general population. No one else’s research interests are an exact match for your own. You’re still going to have to satisfy your own needs.

Ditto for shared data repositories. Though this varies with different types of data and experiments, I know that if I was publishing something under my name, with my reputation on the line, it’s unlikely that I’m going to trust some stranger’s data. I’m going to have to repeat those experiments in my own hands so I can be assured they were done right.

Systems like these do certainly have value in improving awareness and raising new directions, but given the huge amount of time and effort that participation requires, I’m not convinced that they’re going to be great time savers.

Clearer, but still problematic.

Standards: who sets those standards? Again, who rules science? Does that group have any power to enforce those standards?
Who: It sounds to me like you’re taking a system that is already widely distributed and putting all the power into the hands of one tyrant (or at least a council of tyrants). Who gets to sit on the board that makes those standards? Do you have representatives from every research institution on earth? From every funding agency? From every nation’s government? That sounds like a pretty big council if you want to ensure fair representation.

Enforcement: the standards must be ruthlessly enforced or they’re meaningless. The W3C set standards for years, and Microsoft flagrantly ignored them and instead introduced their own proprietary coding to further cement their monopoly hold on the market. What’s to stop that from happening here? What’s to stop a pharma company from deciding that their papers don’t really need rigorous review to be published in the system? What’s to stop UCSF from doing the same? What’s to stop the entire nation of China from doing the same?

Karma: if you decide the system is variable, and each evaluator can take out just the factors that matter to them, then you destroy the entire point of the system’s existence. Remember, a big reason one has a karma system is to reward participation. Otherwise, there’s no incentive for contributing, and a great advantage to freeloading. If you set up a system where the funding and hiring/tenure people only care about aspects like downloads and citations, then every other part of the karma system is going to be ignored. Scientists are extremely good at figuring out what benefits their career and doing only those things, not wasting time on anything else. All those karma offerings for commenting, reviewing, editing, rating, etc. now become meaningless, and the social aspect of your system falls apart.

Gaming: this system is ridiculously easy to game, by the way. If citations are all that matters, then I put out lots and lots of papers that cite one another. I get all my friends to cite my papers and I cite all of theirs. I get everyone in my department, my university to cite my papers and vice versa. Think of blogrolls. If downloads matter, then I continuously download my own paper, I have all my friends do so, I have everyone in my department do so, I have everyone at my university do so. I set up proxies elsewhere running scripts that download my paper. If a star rating system is what matters, then I repeatedly give myself 5 stars, have all my friends do so, everyone in my department do so, etc. The problem with social networks, and social reward systems is that they reward one’s ability to network more than they reward the quality of one’s work. I do understand why so many online who have built large networks and who excel at networking are so eager to switch to a system that rewards their strengths. But I don’t think that’s a good thing for science.

I’m going to suggest that you have two separate issues here, and if you put each on its own track, you might stand a better chance of solving them (or at least solving one of them).

Problem 1) Too much information, too much time and effort is spent digging through TOCs to keep up with the literature. This problem can be solved without tearing down the current system of science publication. Here’s how I would handle it in your situation. I’d set up my own private social network using something like Ning. I’d make everyone in my lab join that network. I’d try to sign on all my collaborators and any friends in science with very similar research interests to my own. This network would be based around creating a reading list, about each member constantly alerting all other members when they come across an interesting paper (or conversely, warning others away from a bad paper). I would make it a private network so everyone feels free to speak honestly, with no fear of reprisal, no fear of hurting their career prospects or the lab’s funding prospects. I’d reward participation through incentives like buying the lab beer if they reach a certain threshold of activity each week, if they reach the next level, then they get beer and pizza. You’d probably have to keep doing your 10 hours of digging per week in the early days to seed the network. Ideally, over time, you’d get others doing the same, and each of you would soon be able to cut out some of that work as the papers were being found by others already. Given the nature of research and the focus of each laboratory, this might be a more rewarding interaction than having a similar network on a much bigger scale. You’re keeping things focused on the subjects that really matter to your research group, and the privacy allows everyone to interact without running into the social issues that prevent participation in open networks.

Problem 2) Access and the profits made by third parties surrounding science research. This is a thornier problem without as direct a solution. PLoS are certainly doing exciting experiments that are pointing towards different ways of publishing research, but it’s early days, and this is a long term (think: decades) issue. If the approach taken by PLoS One is indeed the best way to go, the community will recognize it and move in that direction.

I do firmly believe that there are things that are worth paying for. For example, labs used to do all their DNA sequencing themselves. Now, they farm it out to centers. They used to make transgenic mice and breed lines of flies themselves. Now these things are outsourced, because they’re routine, time-consuming, and the cost:benefit ratio works out better to have someone else do it while the researcher spends their time more productively. I see publishing in the same light. Why spend time and effort doing something you don’t want to do, something that takes away from your “real” work, doing research? Why not pay experts to take care of that, just as you pay an expert to generate an antibody against your protein? Just as each lab seeks out the best bargain, the most bang for its buck when outsourcing, the same rationale should be applied to publishing. Find the ways that are most cost-effective, most beneficial at lowest cost (in money, time and effort).

There’s a reason why people go into research. It’s not because they want to be editors; if that were the case, they’d have gone into publishing. Why force scientists to waste their valuable time? The system is far from perfect and we should continue to work to improve it, but a future where everything is done by an all-volunteer workforce of scientists means a future where a lot less science gets done.

Very thoughtful and astute observations and indeed tricky and constructive questions. Thank you!

Construction/improvement and enforcement of standards may be analogous to the de-facto standard of English as the scientific language. Nobody dictated that everybody has to publish in English, but nowadays most people do, because it is in their own self-interest. Similarly, I’d rely on the usefulness of the standards, together with training and education, to do the bulk of the enforcement. You’re of course right that nobody will keep anybody from publishing a book on astrology in an Inuit native language. It’s just that chances are nobody will read it. If the system is as useful as I think it will be, people will stick with the standards, and a hopefully large enough portion of them will think about improving them (the incentives would be in place). Of course, the W3C is a very good example of standards not working (and one reason I don’t use IE if I don’t have to). Other standards work just fine. What is the difference between standards that work and those that don’t? You’re very right that this question needs to be answered, and I think the language analogy might work here.

Karma and gaming: those are closely related issues. The whole point of having variable karma scores is that you would need to game all of them, all the time, to be sure of getting ahead. You never know in advance which position/grant agency will reward which behavior, so your best strategy is to just do what you’re good at and be better than everybody else. Then the positions/grants, etc., where your expertise is needed will select you. Moreover, at the base of this current publishing crisis lies exactly the solution for another of your gaming options: given the increasing size of the scientific community, asking a few friends to cite you (etc.) will be like a drop in the ocean. Moreover, it would only affect one of many karma aspects.
Thus, of course systems are more or less gameable. My digital utopia would be less widely gameable and more difficult to game than the current system and thus at least a significant improvement.

Also, good suggestions and again, very constructive, thank you.

Your first suggestion, I’m already doing. In fact, because it works so well, I want it to work not only for the section where I’ve been lucky enough to join such a network, but for all my scientific reading. So actually, your great suggestion is one of the reasons I’m dreaming of this utopia, because it extends what works in a small area to all my areas of scientific interest.
We already discussed the problems of implementing my utopia due to historic baggage…

As for your second suggestion, I agree that there are things worth paying for! However, one always has to balance that with the downsides of market economies: profit being the incentive rather than quality of service. Clearly, IMHO, it was precisely the incentive to make a profit, rather than to keep scholarly publishing up to date and ready for the 21st century, that is one of several reasons for the current publishing crisis. Thus, in light of the current situation, my hypothesis right now is that the experiment of outsourcing publishing to private companies is a failed experiment, and we should start a new one. Notwithstanding, because I agree with you that some things are worth paying for, I would pay for my digital utopia, so if private companies can offer the sort of functionality I'm asking for, I'll be the first to join. Right now, over 30,000 PLoS One authors (and UC 🙂) are voting with their feet, and it's a vote of no confidence.

Will TSK restore this confidence in the private scholarly publishing sector by alienating its customers with posts calling them noisy, chaff and whatnot? Usually, insulting your customers will not save your business. Conversely, in my defense, criticizing an employee of a service provider as incompetent might get the employer to improve the service.

Scientists are sick and tired of the current system and flock in droves to the new publishing models. Why do you think this is? If you are correct that scientists don't want to waste their valuable time by being editors (to which I would tentatively agree), how fed up do you think they need to be before they volunteer their time and become editors nonetheless, without any incentive whatsoever? How angry do you think they need to be before they sign up as AEs for PLoS One? The Ultimatum Game works the way it does because people are willing to punish at their own expense. Make the injustice strong enough and you'll get people to give up a lot in order to punish. Posts like the last few from here add insult to injury and, if anything, will alienate your customer base yet further (if that is possible) and make restoring confidence in private scholarly publishing all the more difficult.

The answer is the same when you ask why I am wasting my time posting here: the scholarly publishing system is easily the worst part of my job (and that's even before dealing with the administration, which should tell you a lot!), and I try to take any chance of influencing the developments there.

P.S.: As to your comment above about shared data repositories: what you are saying will not happen is already going on all over the world. Bioinformaticians don't run any experiments at all and rely exclusively on these repositories. They're also the major driving force for more and larger shared data repositories, and these initiatives are growing by the day. This particular area of science is currently exploding in my field as well (i.e., neuroinformatics), and there are people data-mining all over the world complaining about how horrible the interoperability of databases is. Growth doesn't even begin to describe the dynamics in these fields.

Bjoern–continuing on a fresh thread.

The English language dates to the 5th century. If that’s your model, are you willing to give your standards 1500 years to evolve to the point where they’re as widely accepted?

You haven’t explained how these standards are going to be generated, who gets to write them, how different interests get represented fairly, how they’re going to be enforced. If the absolute minimal basics of your plan can’t be determined, then it should be no surprise to you when Kent responds that your system depends on “magic”. The plan reminds me a bit of this Far Side cartoon. Clever ideas come cheaply. Real-world functional business plans are a lot harder to pull off.

Karma: No, you don’t need to game all of them. You only need to pay attention to the few that the funding agencies and the hiring/tenure committees care about. Just as many researchers are currently slaves to the impact factor, those same researchers will be slaves to the few metrics looked at by funding/hiring/tenure agencies. Even if you could stop the gaming, all other aspects of the karma score will be thoroughly ignored, as there is no motivation for putting in any effort. And that’s going to mean that your social filtering system, all the volunteer editing and reviewing you need will not happen. Period. PLoS can get by with a relatively small number of volunteer editors such as yourself because there are enough of you committed to the cause to support it as just one fish in an ocean of publishers. That approach does not scale to the entire ocean. There simply are not enough willing volunteers to cover the entirety of academic publishing. There is no motivation for contributing rather than freeloading. You’re relying on a system based on, to use Kent’s word again, “magic”.

As for gaming, I’ll repeat. It’s much, much easier to game an online review system than it is to game the current system which requires you to actually write papers that are good enough to get published in reputable places. I’m not sure how to make this any clearer: it’s easier to set up a fake e-mail address and give yourself a good review than it is to get a self-serving paper accepted by Nature or Cell. The latter is not impossible, but it’s a much higher hurdle to clear.

I'm glad your local network is helping, though apparently not enough to prevent the workload of scouring the literature that you complain about. Why do you think it's failed to reduce the number of hours you spend looking at TOCs? I still think that smaller, focused networks are going to be more effective than bigger, open ones. Big networks like Mendeley seem to be seeing some usefulness as tools for small working groups, but most I've heard from don't find the whole network terribly useful as a recommendation engine. Your mileage may vary, but I think that scaling things up is likely to introduce more noise into the system than signal.

And as noted above, implementation of your system seems to have more serious problems than historical baggage. Once you have a well-detailed, thoroughly planned, functional system, then that might become a problem. As you yourself note, really, you've just got dreams of your own personal utopia without much by way of practical implementation. Why worry about historical baggage when there are much bigger fish to fry?

Profit and quality of service are not mutually exclusive things. The best companies realize that they go hand in hand. The market needs to decide what is important and reward those who provide it. Profit can be an extremely good motivational tool for those seeking to provide what the market needs. By all means, organize boycotts, suggest that your institution drop publishers who don't provide the quality of service that you need. Don't publish with them, don't review for them, don't read their articles. Patronize only the publishers who do things your way. Hit the others in their pocketbooks; that's how you effect change. Replacing them with a system where you don't have any such leverage strikes me as worrisome. As I've asked repeatedly, if your utopian system doesn't provide a researcher with the service he or she needs, what recourse do they have? When you've been unfairly rejected, you can't appeal to another publisher when there are no other publishers.

If you don’t think publishers are constantly experimenting with new technologies and new means of presenting information, you’re not paying attention. They may not be doing exactly what you want, but they are constantly trying out new modalities. Why do you think science publishing moved into the online world faster and more successfully than pretty much any other type of publishing? How many publishers have started social networks or tried to incorporate aspects of web 2.0 into their publications? How many publishers offer some aspect of open access business models? How many publishers have made their archives freely available to all? Go to a publishing meeting, most of what you’ll hear about is new tools for data visualization, semantic analysis and community interaction. I think the disconnect you’re having is that publishers actually do have to sweat the details, do have to worry about the practical implementation of new structures. It’s not as simple as wishing for utopia.

The Scholarly Kitchen is not meant to “restore this confidence in the private scholarly publishing sector”. It is a business blog, written by business people (and one librarian) for publishers and librarians. It is not a blog written for scientists nor is it meant as a public-relations outreach tool for customers. You gave up any chance at being offended or alienated when you wrote a public article calling Kent Anderson an “ignoramus” and telling him to “go fuck himself”. You don’t get to pretend to be the wounded party anymore. If you’re serious about changing the way scientists publish their work, then such unprofessionalism is likely hurting your chances of being taken seriously both by publishers (among whom Kent has earned respect for decades of hard, innovative work) and by your own colleagues. Is it any wonder that scientists “register active discouragement of blogging — a form of communication that in their eyes carries no stamp of reliability or prestige”? If you were speaking at a meeting and someone disagreed with your interpretation of your data, would calling them names and hurling profanity be an appropriate, professional response? Is this the proposed level of discourse in your utopia?

You are not a customer of the Scholarly Kitchen (it’s a free blog, do we even have customers?), and given your research interests, you are not a customer of the publication Kent runs. The Society for Scholarly Publishing is not Kent’s employer. In my experience, when a customer lodges a complaint through a childish, name-calling rant, it’s unlikely to prove effective. I have a Ph.D. and did bench research for 15 years. I’ve been in the publishing business for the last 10, and it’s taken me that long to really start understanding how the business works (and I still consider myself a novice). Reviewing papers for PLoS does not make you an expert on the publishing business. Dismissing the wisdom of people with decades in that business because they’re “English majors” smacks of closed-mindedness and fanaticism. You may not agree with Kent’s opinions or the way he presents them. Is it possible to criticize his ideas without personally insulting him?

If “scientists are sick and tired of the current system and flock in droves to the new publishing models,” then why are so few of them actually doing so? How many scientists serve on editorial boards for non-PLoS journals? Perhaps their opinion differs slightly from yours? Or are they just ignoramuses (ignorami?)?

As for shared data, I did note that there are fields such as informatics where these approaches work well. For much of wet bench science though, the results are likely to be quite different. There is no “one size fits all” approach that’s perfect for every field of research.

Not joining the whole discussion, but I wanted to respond to this:

And that’s going to mean that your social filtering system, all the volunteer editing and reviewing you need will not happen. Period. PLoS can get by with a relatively small number of volunteer editors such as yourself because there are enough of you committed to the cause to support it as just one fish in an ocean of publishers. That approach does not scale to the entire ocean. There simply are not enough willing volunteers to cover the entirety of academic publishing. There is no motivation for contributing rather than freeloading. You’re relying on a system based on, to use Kent’s word again, “magic”.

When I read that, my first response is that I have reviewed papers for the for-profit journals, and I don't recall getting paid for it at all, so either the check got lost in the mail or I was volunteering. To my knowledge, EVERY journal relies on volunteer labor at the reviewer level (but I could be wrong).

I think part of the disconnect is that I, and I would assume other PLoS ONE AEs, don’t see being an AE as being that much more of a commitment to a paper than a reviewer might. It is certainly more work on average, but I don’t think it is so much extra work that it makes us ‘committed’. If PLoS ONE takes off, there will be a lot more people who think highly of it and are willing to join the Ed board and take on a little, but not that much, extra work compared to what they do as reviewers.

The problem is that the proposed system massively expands the amount of work you'd have to do as a volunteer, and expands the number of papers you'd have to do it on by orders of magnitude. It asks for the entirety of scholarly publishing to be handled in this manner, and it asks for volunteers to take on all the hard and time-consuming work that the paid employees of a publisher like PLoS do. How much time are you willing to spend arranging, tracking and organizing peer review for thousands of papers a week? How much time are you willing to spend copyediting, checking references, and so on? Are you willing to expand the amount of work you're currently doing by 10 times? By 100 times? How many of your colleagues do you think would be willing to cut out research or working with their own students/postdocs to do so? Is that the best, most efficient use of a scientist's time? Is that a way to get more, better research done, or is it a timesink that will take away from time at the bench?

It’s unclear if something like PLoS One can scale even with the small amount of work you’re asked to do. Asking it to scale to a massive level and massively increasing the work each volunteer must do strikes me as an unlikely prospect. Most scientists have better things to do with their time.

David, are we talking about the same thing? I was not referring to the Bremtopia, but simply to scaling the PLoS ONE model to handle more manuscripts. Which I think is eminently doable.


Sorry Ivan, I was talking about Bremtopia. I agree that PLoS One will likely be able to scale to deal with an increase in manuscripts, though it depends on the size of that scale. If you're talking about it handling the entirety of scholarly publishing, then no. If you're talking about moderate increases as the journal grows, then no problem.

The English language dates to the 5th century. If that’s your model, are you willing to give your standards 1500 years to evolve to the point where they’re as widely accepted?

lol 🙂 Looking at the growth of experiments like PLoS One and Frontiers in Neuroscience, I’d wager it’ll take a little less than that 🙂 English as a de facto scientific standard, by the way, is probably less than 60 years old, I would guess.

You haven’t explained how these standards are going to be generated, who gets to write them, how different interests get represented fairly, how they’re going to be enforced.

I had hoped my analogy would suffice; clearly it didn't.
Generating standards: they’re already being generated and tested in experiments such as PLoS One and Frontiers in Neuroscience. The results of these experiments would be able to tell us which could be adopted as standards, if my utopia were ever to materialize.
Who gets to write them: whoever thinks they have a great idea. The ideas will then be tested, and it will be decided whether they should be adopted or not.
Fair representation: just like in any other scientific society, there are always people who like the sort of responsibility that comes with a public office and who are willing to dedicate time and effort. If these people didn't exist, scientific societies wouldn't exist. Since the standards are federated, the society analogy holds just fine; the standards would be analogous to society by-laws. Because the scientific community continues to grow, there will be no shortage of manpower.
Enforcement: If you're not adhering, e.g., to the standard of how to cite papers, you can't be cited. If you leave the system, just like now, you're effectively outside the scientific literature. This is enforcement enough to keep the system running smoothly, just as today nobody really publishes anything outside the scholarly journals. If no enforcement is required now, why should more be required then? Given the current system, this enforcement question is starting to sound a little weird, as people are flocking to PLoS One and Frontiers in Neuroscience precisely because the current de facto standards have become so unusable. In contrast to today's system, scientists would actually be able to adjust the standards. To me, it would make more sense to ask you the enforcement question, really. The answer will be the same in both cases.

game the current system which requires you to actually write papers that are good enough to get published in reputable places

No. Gaming the current system just requires you to know the editors well enough; see my examples above. Taking editors out to dinner is decidedly easier than faking thousands of scientific papers that will consistently cite your work over several years without getting caught. Taking editors out to dinner is decidedly easier than faking the identities of thousands of researchers to leave enough comments and reviews on your papers. Taking editors out to dinner is decidedly easier than getting a botnet to fake IPs in order to download all your papers over several years to fake a readership that isn't even reflected in citations. Taking editors out to dinner is decidedly easier than doing all three of the above for at least a decade, which is what would be required to build a fake track record. If you think taking editors out to dinner is more difficult, we may have to agree to disagree.

Karma: No, you don’t need to game all of them. You only need to pay attention to the few that the funding agencies and the hiring/tenure committees care about.

In the worst case, you'd have to game, say, only the three things I referenced above. I still don't see how that is less difficult than taking editors out to dinner, which works just dandy now, I can assure you. Whatever comes after IFs, it's going to be better. Probably not perfect, but better. It's a very low bar, really.
Oh, and please don't tell me that the papers in the journals you referenced are any better than the papers in other journals; there's just really no evidence for that. Given two equally 'important' papers, the guy who takes the editors out to dinner always wins. They probably even win if theirs is a little less 'important'.

I’m glad your local network is helping, though apparently not enough to prevent the workload of scouring the literature that you complain about.

Oh, it is; it would be more than the hours I quoted above otherwise. It's just a comparatively small section of my reading.

Big networks like Mendeley seem to be seeing some usefulness as tools for small working groups, but most I’ve heard from don’t find the whole network terribly useful as a recommendation engine. Your mileage may vary, but I think that scaling things is likely to introduce more noise into the system than signal.

I’m inclined to agree here, but isn’t this more of a technical, rather than a fundamental problem?

Profit and quality of service are not mutually exclusive things.

Of course not. Nevertheless, coincidentally, here in Germany, after a wave of failed privatizations, several communities are now starting recommunalization (does that word exist in English?) projects. So taking back a public good after market forces have failed to fairly distribute it is not at all unheard of. These communities, just like the scientific community, cite exorbitant profit margins and skyrocketing costs as the reason. For some things market forces seem to work, for others not. I'm not claiming to know that scientific publishing belongs in the second category, but current evidence seems to point in this direction.

As I’ve asked repeatedly, if your utopian system doesn’t provide a researcher with the service he or she needs, what recourse do they have? When you’ve been unfairly rejected, you can’t appeal to another publisher when there are no other publishers.

🙂 As I've unsuccessfully tried to explain repeatedly, you have the same problem now: if all the reviewers in your field are convinced your paper is not ready to publish, you won't publish it. I just handled a paper like this for PLoS One, where almost every expert declined because they had already rejected the paper for another journal. In the current system, this paper has wasted the time of a great many people. In my system, what would happen is this: you decide what needs to be done to fix the paper; if the authors disagree, you ask some more people; and if the authors are alone in their opinion, one has to decide what the reason for rejection was. In the example I'm thinking of, what was missing was a number of extra experiments the authors don't have the manpower to do. It has not yet been decided what is going to happen, but if the authors insist they want to publish these results anyway, I'm somewhat inclined to publish, because the data are tantalizing and it would be a shame not to let anybody know about them. However, it would be a very cautious paper with few if any firm conclusions. At least the data would be out there to stimulate further experiments in the field.
Compared to what I envisage, PLoS One is tiny, yet even for my small subfield there are already several AEs to choose from. Thus, PLoS One is already large enough to allow fair treatment for everybody, with plenty of recourse. Obviously, a much larger system will provide more, not fewer, avenues for recourse.

Nevertheless, with space being cheap, I sometimes think it maybe wouldn’t be such a bad idea to have a repository for the rejected manuscripts, just in case, decades later, someone manages to find the odd gem of genius, far, far ahead of its time in there… 🙂

It is not a blog written for scientists nor is it meant as a public-relations outreach tool for customers.

Intentional or not, the title has scholarly publishing in it, your posts deal with a central aspect of scientists’ work and many of the authors have more or less close ties to the corporate publishing industry. Thus, scientists will read the posts here and discuss their content here and elsewhere, like it or not. I take the posts here as representative of the general attitude and level of discourse of the scholarly publishing business in general. That scientists pay attention despite their busy schedule and despite the fact that you’re not writing for them should tell you something about how eager they are to change something. You may not have meant it that way, but your blog is actually an opportunity to innovate. We’re doing it right now.

If you were speaking at a meeting and someone disagreed with your interpretation of your data, would calling them names and hurling profanity be an appropriate, professional response? Is this the proposed level of discourse in your utopia?

In the rare exceptional case where, say, a creationist were to stand up and tell a group of evolutionary biologists how to conduct their dating techniques, it may not necessarily be appropriate, but at least excusable 🙂
If I were a camera salesman trying to tell my customers they should put the digital camera they selected back on the shelf and instead pay twice the price for an antique film camera which isn’t even fully functional anymore, it may not be entirely appropriate for the customer to tell me to go fuck myself, but it would be understandable and probably not entirely wrong. 🙂

I have a Ph.D. and did bench research for 15 years.

Which is probably one of the reasons why your arguments actually make sense and do bear on the reality of the working scientist.

If “scientists are sick and tired of the current system and flock in droves to the new publishing models,” then why are so few of them actually doing so?

Probably because those who do are flocking to the soon-to-be largest journal in the world, PLoS One, or to the similarly successful Frontiers in Neuroscience system. Their growth is exponential and constitutes exactly the kick to the pocketbooks you asked for (and if not already, then soon).

I'll be off for a presentation in Barcelona until next week, by which time this thread will probably have thoroughly run its course. It was a pleasure debating with you, David, as it has forced me to think yet harder about my utopia, how I would like publishing to be, and what of that might be realistic. I love well-founded opposition and critique, and yours certainly was.
In the end, time will tell what sort of publishing the world will end up with, eventually. So far, the defenders of the current system are few and far between, at least in my scientific fields.

Thanks again and best wishes.

I may be a better translator than Kent in this case (or perhaps just less blunt), but essentially we are saying the same thing. Publishers are constantly besieged by clever people who feel they have the path to the future, the plan that’s going to revolutionize scholarly publishing. These sorts of ideas are a dime a dozen, and the response publishers give back is usually, well, that’s an interesting idea, put it into practice, prove that it really works and then we’ll talk. I’ve taken a little more time than Kent, but it is essentially the same message. We see a lot of plans like this. Every startup out there is begging for partnerships. Your plan in particular is lacking in detail and is unrealistic in its current state. I think you realize this as you continually describe it as a “dream”. Until it becomes a lot more concrete, you won’t find many busy publishers willing to bother with much analysis. Just as a scientist approaches any unpublished results with a skeptical mindset, you’ll have to accept that people in the publishing business are likely to do so as well.

The idea that a set of standards can spring from the collective unconscious, be universally accepted, be written by anyone who wants to write them, be federated but not centralized, be organized but completely distributed, have no enforcement power yet be strictly enforced (except when someone decides to change the standards, which anyone can do at any time) doesn't seem like a real-world workable plan. Every other sentence seems to contradict the one before it. Frankly, it's either exceedingly naive, or it's deliberately insulting to everyone in publishing and specifically to the hard-working people at PLoS. They've had to do an enormous amount of work, overcome major hurdles, think creatively and grind away for years to find a working model. To suggest that one could do the exact same thing as they have done, but on a much more massive scale, with no organization, no committed full-time employees, no hard work, no detailed planning and no funding belittles their efforts and shows a fundamental lack of understanding of what it is exactly that they (and every other publisher) do.

Enforcement: “If you’re not adhering, e.g. to the standard of how to cite papers, you can’t be cited.” Except if you decide to change the standards, as anyone who wants to can rewrite them. So if you’re a pharma and you want to change the rigor of review, you can do so any time you want, because you’re as in charge as anyone else. And you can’t be cited, unless someone decides to cite you. You’ve got a system where you’re assuming every scientist on earth will agree to the exact same set of rules and follow them religiously, and you’re building in the ability to completely ignore those rules at will. That doesn’t sound like a system to me, that sounds like chaos.

Gaming: If all it took to increase one’s own standing was to take a journal editor out to dinner, I’d be eating a lot better than I am. You left out a crucial step in the process. Let’s review:
Current system:
1) take an editor to dinner
2) do years and years of research
3) write a paper on that research
4) send it to your dinner companion
5) get it past a set of independent peer reviewers
6) gain stature

Your new system:
1) set up a fake e-mail account with Yahoo
2) give yourself a high rating

Which of those is easier? You do make a point that networking is valuable to any researcher. It likely does help you to establish good working relationships with journal editors. It also helps you to establish good working relationships with other scientists. If you're trying to get your student into a competitive course, knowing the instructor is likely going to be helpful. If you're trying to get a job at a university, knowing some of the faculty there is likely to be helpful. None of this is a guarantee of anything, and if you really think journal editors readily compromise their jobs for the price of a meal, you're fooling yourself. We really like our jobs, and most of us would like to keep them. We're not going to throw away our careers no matter how tasty the crème brûlée. Come buy me dinner sometime; I'll be happy to reject your paper over dessert.

Forget gaming though, let’s assume you can completely eliminate it–you still haven’t explained why anyone would do any work in your system. Right now, no one comments on papers, no one rates papers, very few blog about papers. There is no incentive to do any of that work. Your system provides no incentive but is completely reliant on it happening. Another example where magic seems to be required.

Recourse: your example is a great one, showing why it's so valuable to have a varied ecosystem with lots of different types of publishers. PLoS One is a useful part of that ecosystem. If a paper isn't appropriate for one type of journal, or runs into problems with one set of journal editors, it can find a home in another journal. In your system, every paper that's rejected is immediately going to be resubmitted to another set of editors from the same journal. If they reject it as well, it gets immediately resubmitted to someone else, clogging the system and massively increasing the workload. Why revise your paper? Just resubmit it to someone else; eventually, it'll come out. If you allow endless resubmissions to the same journal, then what was the point of peer reviewing it in the first place? Why not just accept everything and save everyone the bother if that's going to be the end result? Or better yet, if I'm an author and I get rejected, I just make up a new standard (which is allowed under the system) and accept my own paper. Problem solved.

Let me state this emphatically: Monopolies are bad for content creators. If you have one journal with a firm set of rules, then you have a monopoly. If your rules are flexible, then you have chaos. Neither is an improvement over our current state of publication.

Scientists and anyone else are welcome to the conversation here (though it is moderated to keep things civil). Scientists are not, however, the "customers" of the blog. We do this in our spare time. We are not paid to do this. We have no customers, nor are scientists the intended audience. We are not writing to "restore this confidence in the private scholarly publishing sector". If that's what you're looking for, you're reading the wrong blog.

Acting like a child is not excusable under any circumstances if you are an adult. If you are a professional, then it’s even less excusable. In an age where college students have learned to stop posting their photos of alcohol and drug abuse on their Facebook pages, one would think that an employed scientist would at least be as smart regarding his or her public reputation. If I’m on a hiring committee and I need to narrow down the 400 or so qualified applicants for a job, odds are that I’ll be able to cut out all the applicants who have publicly told an executive at the New England Journal of Medicine to go fuck himself. It’s endemic to the perpetually immature culture of the internet, where being snarky and insulting is preferred to being intelligent and informative. This is likely part of the reason why so few scientists read science blogs or take the medium seriously.

Attacking the messenger rather than the ideas is a sign of a weak, indefensible argument. You repeatedly state that you don’t want anyone without a degree who hasn’t done research telling you how to handle your business, yet you are completely willing to tell professionals in other fields how to do their business despite your near-complete lack of experience. Is this perhaps somewhat hypocritical? Why are you allowed the leeway to instruct experts in their own fields, yet no one is allowed to do the same for you in yours?

You don’t see the “defenders” of the current system because they’re actually hard at work in their laboratories exploiting the current system, rather than spending all day writing blogs, chatting on FriendFeed, and dreaming of utopias. I was recently interviewed for an article on Open Notebook Science because I was the closest thing the author could find to an opponent. I told her my opinion: it’s a great idea, really the ideal way science should be done, but very difficult to implement given how funding and jobs work in science. She asked me where the opponents to Open Notebook Science were, and why she couldn’t find their blogs, which is a completely laughable question. If you’re an advocate pushing a cause, you need to stand up to have your voice heard. If you aren’t pushing for change, if you aren’t advocating a revolutionary overthrow, you have no incentive to waste your time discussing ideas you think are pointless. Why would you start a blog to talk about something that doesn’t interest you? I find the car I drive to be adequate. Should I start a blog defending the adequacy of my car? Is that likely to be a fun and useful way for me to spend my time? And that’s why you’re not seeing the “defenders of the current system.” They see no need to defend anything. They’re more interested in being scientists, in doing research, than in talking about publishing business models. Their numbers are vast, but they are silent and invisible compared with the more visible and vocal minorities.

In the midst of this extensive discussion, I noticed that (Herr Dr.) Bjoern Brembs is in serious need of a librarian!

Bjoern, Google Scholar shouldn’t be your first step, particularly since you have access to Scopus & Web of Knowledge. They’re very useful (and expensive) databases that could probably serve you better with some training and/or assistance. It looks like Frau Dr. Gabriele Menzel at Charite could help you with search strategies. If not, I’d guess that she could direct you to the correct person.

Dear Laura,
Are you, by any chance, a librarian yourself? From the point of view of a scientist who knows how to use Scopus and WOK just as well as Google Scholar, I can assure you that I find Google Scholar VAAAASTLY SUPERIOR to those other two extortionately priced services.
First, it is free, and I can thus use it from anywhere, whilst I have to be at work, within the secured domain of my wealthy institute to use the other two.
Second, it points to any version of the papers it identifies anywhere on the web, including the self archived ones. In fact, I could not have carried out the last bit of research I have just completed without Google Scholar because it has allowed me to access many pdfs of old papers and book chapters that I would never have found without it. It has been soooo useful that I have actually felt compelled to acknowledge the people running Scholar, and JSTOR, in my manuscript.
Third, it finds many more citations of papers than the other two, and references them much faster too.
Fourth, it is quite tolerant of typing mistakes, both for the query, and for the results, which can make a world of difference in the breadth of the results of your quest.

All in all, I would say that using Google Scholar does indeed have a little bit of a magic feel about it, but that is probably related to the fact that I do not understand how it works 😉

I am a co-author on two published PLoS ONE papers, and I can tell you that one of the main reasons we submitted to PLoS ONE was because it is FAST while also maintaining a high quality review system. Impact Factor and the cost of submission were not issues.
I agree with Ivan Baxter’s previous post that the high impact factor may be more of a curse than a blessing if the result is a wave of submissions that bog down the publication process (from those who chase impact factors but won’t in turn review for PLoS ONE).

I seem to be coming very late to this discussion, but I just wanted to point out another likely answer to the question raised in the title of the original post. An impact factor of 4.35 for PLoS ONE may well prove to be a curse for the many, many journals with impact factors in the same range or below. Many scientists will ask themselves why they should burn time and energy fighting to get published in a field-specific journal with a lower IF if they can just submit to PLoS ONE and have a good chance of getting published. So we might be seeing the first steps toward concentration of the literature from many fragmented small journals into one massive PLoS ONE-style repository, plus a limited array of “higher impact” journals in each field.

Three alternative factors must be created:

1) Diffusion factor (paper availability)
2) Social impact factor (importance for the society)
3) Ecological impact factor (importance for the environment)

So, PIs with grants don’t make up data? Solid peer review is not needed because you have a grant? What a pile of BS. Many PIs get grants not because their science is any good, but because they have buddies at various levels in the system. Reducing peer review will only result in worse science being published and in the dissemination of bogus research.

It seems to me that rather than describing PLoS ONE’s innovations as “light review,” we should see them as shifting from a traditional pre-evaluation model to a post-evaluation model. That is, traditional publishing (printing a journal and mailing copies of it to all subscribers) was forced by its high distribution costs to impose a very strict pre-filter on its publication decisions. Let’s call that “pre-evaluation” — an attempt to *predict* what papers the overall scientific community will ultimately find valuable. By contrast, if the costs of making papers accessible are tiny, this is unnecessary and we can simply switch to a post-evaluation model, i.e. the community itself rates the value of each paper, both directly (by entering ratings on PLoS ONE’s page for that paper) and indirectly (by using it and citing it). I wrote a lot more about this model and transition here:

I recently submitted my first paper to PLoS ONE; the paper was reviewed, and the revised submission is currently under review. The paper was reviewed by two reviewers and, in my opinion, was reviewed very rigorously. We had to perform a few more experiments. I don’t know yet whether the paper will be accepted or not. I am sure that if I had submitted it to one or more of the so-called niche journals with slightly higher impact factors, it would have been rejected with the comment “it is not significant enough for this Journal.” Not a whole lot of people work on my organism, and fewer people working means fewer citations. When I was a postdoc, my lab PI would never reject a paper from certain laboratories, no matter how insignificant it was.

The paper was accepted a few days back. I am extremely surprised at how rigorous the review process was. I hope the editors and the reviewers keep up the good work.

If the PLoS ONE model is a failed one, why are all the so-called established journals following it? Look at mBio, and now the open access journal from Nature.

Who said PLoS ONE is a failure? Indeed, I described it as an example of astounding success; however, that success may come with unintended consequences for the publisher.
