
There seems to be a trend emerging — open access-friendly pieces appearing in major newspapers, pieces apparently originating from London.

A couple of weeks ago, George Monbiot published a rant in the Guardian about open access, one which even open access advocates felt was a bit shrill. We dealt with that piece here, in a post that yielded the most comments we’ve ever had, and some of the best. Now we find another bit of open access spin in, of all places, the New York Times. The piece was written by D. D. Guttenplan, a London-based correspondent for the International Herald Tribune, which often shares content with its parent, the New York Times.

Guttenplan also blogs for the Guardian.

To illustrate how tilted the reporting in the New York Times piece is, I’ve taken some quotes from the article to show how a journalist with more perspicacity might have treated the same material:

  • “Unlike paper publications, PLoS One has no restrictions on the number of articles it can accept.” — Endless capacity is presented as a clearly positive innovation. In some ways, this is true — having no restrictions on the number of articles it can accept has led PLoS ONE to become a bulk-publishing powerhouse, boasting of being the largest journal in the world and not flinching from an acceptance rate of nearly 70%, all while this high volume of author charges was a major driver in PLoS’ first year in the black. But there is another side to the story, one Guttenplan doesn’t really touch. Filtering content is about restrictions, limits, and so forth, all to save readers work and help to ensure interest and relevance. Consciously changing criteria with the rhetorical shift from “scientifically sound” to “methodologically sound” in order to justify a business model predicated on acceptance instead of rejection could easily be portrayed as cynical. The fact that a sentence this clearly problematic isn’t criticized in the New York Times’ piece suggests gullibility, if not outright complicity, on the part of Guttenplan.
    • Alternate version: A relatively new journal, one which requires authors to pay to be published, has eliminated traditional restrictions on capacity, accepts 7 out of 10 papers it receives, and has no clear audience other than the amorphous world of “science.” By pumping through as many papers as possible at more than $1,300 each, this journal has allowed its parent organization to generate a 21% profit after only four years of backing this new style of limitless journal. All the while, this organization’s adherents complain about the high costs of publishing and the profit-taking of major publishers, apparently in an effort to tilt more business their way.
  • “‘We don’t ask how important the work is or whether the findings are new,’ said Dr. Patterson, a geneticist who worked at Oxford University and Stanford University before going into publishing. ‘We think there are other mechanisms that can decide those things.’” — Again, same problem as above — blatant promotion of a lesser standard, foisting the work of filtering off on others, and evading responsibility for everything but the most basic type of review allowable. “We don’t ask how important the work is or whether the findings are new”?!? A skeptical journalist would have torn that apart. Why don’t you ask how important the work is or whether the findings are new? Are those questions really that hard to ask? Are you saying PLoS ONE publishes redundant, unimportant works? What is its role? What other mechanisms exist to vet what PLoS ONE pushes onto the market? Why does it push 70% of what it receives onto the market? Is it just to make money?
    • Alternate version: Most publishers take responsibility for publishing materials that will be important or new, placing a premium on findings that fulfill these criteria and rejecting the rest. This “cost of rejection” is a major contributory factor to the cost of running respected journals, many of which have acceptance rates well below 20%. Yet, open access adherents play by different rules. “We don’t ask how important the work is or whether the findings are new,” said Dr. Patterson, a geneticist who worked at Oxford University and Stanford University before going into publishing. “We think there are other mechanisms that can decide those things.” By flooding the market with papers that don’t meet the criteria of importance or novelty, and assuming someone else will take care of those things, one is reminded of factories spewing pollution into rivers and hoping someone downstream will clean up after them.
  • “Whoever pays the bills, publishing is not free. . . . According the Mr. Suber converting to open access ‘will involve some cost shifting, But [sic] also considerable cost savings’ for libraries and university budgets.” — Publishing is not free, yet most of the article is about how access should be free and how radical changes in economics, made possible by the Internet, may make this feasible. Yet, as many of us know, online publishing is expensive, retaining all the fixed costs of print publishing, and adding new fixed costs that often exceed the variable costs print incurred. The lack of viable open access publishers — those not needing grant support, cross-subsidization by traditional revenue streams, or government support — would make a more skeptical reporter ask some questions. Where is the evidence of this? Is publishing 70% of the papers you receive using a lower bar of acceptance the only way to make it viable? Is that really saving anyone money? What about the indirect costs of discoverability? What about the filtering costs you’ve explicitly foisted off on others (see above)?
    • Alternate version: Publishing is not free, open access advocates admit. Peter Suber, an open access advocate of long standing, asserts that converting to open access “will involve some cost shifting, but also considerable savings” for libraries and universities. These assertions have not been proven. Meanwhile, based on unfounded assertions like these, funds to support open access publishing have been set aside at many institutions, drawn from library and academic budgets. But many go untouched or underutilized, plunging these funds into the shadows and depriving libraries further at a time when providing what their patrons want most is difficult. Meanwhile, sustainable open access publishing seems to be a numbers game at some level — either cross-subsidized by other revenue sources or dependent on low rejection rates to ensure a sufficient stream of paying authors.

There are other surprises in the New York Times article, one of which is that a spokesman for Elsevier declined to comment when contacted. Really? If a concerted effort really was expended to get hold of someone from Elsevier and this was their (lack of) response, I’m surprised. Elsevier should have these responses memorized by now and know enough to answer calls from reporters. Elsevier’s apparent lack of a current response left Guttenplan to quote from 2004 Parliamentary testimony, long after some people have left the organizations they’re associated with in the piece, creating yet more of a throwback aspect to his writing on the topic. He could have at least updated those simple factual matters.

Guttenplan also cites the Scholarly Kitchen, but the mention is cursory at best in its portrayal of discussions here, and no effort was made to contact anyone here for a quote or comment.

It seems most of the effort was expended getting quotes from open access advocates, another indication that the article’s true intent wasn’t objective journalism.

The arguments of the open access advocates aren’t really the point here — they’re pretty much retreads of the arguments of old. What’s new is the level of activity and prominence of the open access PR machine — possibly a London-based branch of it. In just a couple of weeks, we’ve witnessed two unabashedly pro-OA pieces written by two different people — both of whom have a record of siding with the downtrodden and lost causes — published in major newspapers (the Guardian, the New York Times), and composed of ideas, verbiage, and arguments more than a decade old. If it’s coincidence, it’s one heck of a coincidence.

Is it possible we’re seeing how PLoS might be spending its surplus — on public relations, just like they did in the early days? If that’s the case, get ready for an onslaught of the same kind of messaging we heard in 1999. However, this time, open access is mainstream, profitable, and rather well-understood.

If only the newspaper editors were providing a better filter.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.



68 Thoughts on "London Calling — Open Access PR Wends Its Way From London Into a Major US Newspaper"

There is also a glaring and slanted factual error in the NYT piece. It says: “‘We are trying to attract papers which might previously been published in leading journals like Science, Nature, or Cell,’ said Mr. Kiley, noting that the Wellcome Foundation, like the National Institutes of Health in the United States, already requires the researchers it funds to publish their results in open access journals.”

NIH does not require OA publishing, just open archiving. Researchers can publish anywhere.

The lack of viable open access publishers — those not needing grant support, cross-subsidization by traditional revenue streams, or government support — would make a more skeptical reporter ask some questions.

What about BioMed Central? Although now owned by Springer, BMC itself is viable and profitable without subsidies, having just published its 100,000th article.

Show me something proving that BMC is profitable on its own, and doesn’t rely on Springer resources to generate its margins, hiding overheads inside a larger entity. I’m betting the reason BMC isn’t standalone anymore is because it saw the writing on the wall and wanted the shelter of a bigger organization.

Kent, I’m glad you brought this up. One of the ways to help make OA work financially is to reduce editorial and production costs (the other is to increase the number of papers published, and hence the amount of author fees collected). Larger organizations, those that publish large numbers of journals, have a great advantage of scale. For a publisher that puts out more than 1,000 journals, costs can be amortized over those publications; each needs to cover only a small percentage of overhead. Because they’re buying in bulk, each journal’s individual costs are lower. But for a small publisher, say one with fewer than 10 journals, each journal must cover a larger share of total costs, and those costs are higher because they’re buying on a smaller scale.
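The amortization argument above can be sketched with some back-of-the-envelope arithmetic. All figures here are hypothetical, chosen only to illustrate the shape of the effect, not drawn from any publisher’s actual books:

```python
# Illustrative sketch (all figures hypothetical): how fixed overhead
# amortizes across a publisher's journal portfolio.

def overhead_per_journal(total_overhead, journal_count, bulk_discount=0.0):
    """Share of fixed overhead each journal must cover.

    bulk_discount models cheaper per-unit costs from buying services at scale.
    """
    effective = total_overhead * (1.0 - bulk_discount)
    return effective / journal_count

# A large house: $50M overhead spread over 1,000 journals, with 20%
# savings from buying services in bulk.
large = overhead_per_journal(50_000_000, 1_000, bulk_discount=0.20)

# A small society publisher: $600K overhead over 5 journals, no bulk savings.
small = overhead_per_journal(600_000, 5)

print(f"Large publisher: ${large:,.0f} per journal")   # $40,000
print(f"Small publisher: ${small:,.0f} per journal")   # $120,000
```

Even with a modest overhead base, the small publisher’s per-journal burden comes out several times higher, which is the scale advantage the comment describes.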

This presents something of a paradox. A drive toward a literature that is entirely OA inherently favors the largest publishing houses over the smaller, not-for-profit, academically owned publishers, because scale becomes even more important. The question, then, is whether an increased reliance on OA means further consolidation of the scholarly publishing world among a small number of very large corporations.

David, if your argument on the economies of scale is correct, then open access publishing actively works counter to the revolutionary, anti-hegemonic “take back publishing” narrative.

To answer the question by analogy: how many Internet search engines are there with >10% market share? This seems to be one of the basic economic principles of the digital age: you are either incredibly niche, and thus able to charge a price that covers your costs and profit (though at the very considerable risk of fashions changing and killing you), or you race to gain size as fast as possible in order to starve any competition. We are back to our old friend the network effect. The middle ground seems to be a very dangerous place to be.

Of course, you could insist that authors back their publications’ importance by introducing a submission fee (non-refundable in the event that the paper is rejected, incorporated into the publication fee upon acceptance of the paper).

The CEO of Springer is on the record stating that BMC is profitable. See the interview link and quote below:

Q: When Springer bought BioMed Central, the press release quoted you saying, “This acquisition reinforces the fact that we see open access publishing as a sustainable part of STM publishing, and not an ideological crusade.” When I spoke to BioMed Central’s founder Vitek Tracz recently, he too used the word “sustainable.” Neither of you used the word “profitable.” Is BioMed Central profitable?
A: Yes, BioMed Central has a very healthy margin, more than double digits. It is not marginally profitable but a very sound business.

I’m going to suggest that there’s no grand conspiracy here, but instead a combination of two factors:

1) Newspapers have certain tired but reliable tropes they like to roll out periodically. Stories that evoke outrage are good copy, and in the age of the internet, stories that generate comment and linkbait are highly desired. That’s why, even in an age where Apple is one of the most successful companies on earth, we still see the same sorts of stories about Apple’s imminent doom that we saw decades ago. These sorts of editorials have been reliable attention-getters for more than a decade.

2) Science Online London (http://www.scienceonlinelondon.org/), an annual event, took place earlier this month. It is traditionally a gathering of the flock for those deeply committed to open access. I’m assuming it garnered some local press interest in the UK, hence this recent rash of editorials focusing on the faithful with little input from those with different perspectives.

Ah, yes, Science Online London. I should have thought of that. I’ll bet that re-energized the flock. Good call.

Except the Monbiot Conflagration (as it should henceforth be known) was written and published prior to Science Online London. Also, the points of view expressed at the event are a damn sight more nuanced, at least that was my impression based on the live stream.

Fair enough, I was looking for a geographical reason and thought that might apply. Phil’s suggestion above may be more relevant.

I think that the formation of the UK Open Access Implementation Group (http://open-access.org.uk/; supported by, among others, PLoS) is far more likely to have had an impact on any upswing in pro-OA coverage coming out of London. The UK Open Access Implementation Group has been lobbying very hard, including directly to David Willetts, Minister for Universities and Science in the UK government.

Thank you for this thoughtful and so very true piece. I am currently attending COASP (Conference of Open Access Scholarly Publishing) in Tallinn, and today was dedicated to so-called mega journals: journals that, like PLoS ONE, accept across all disciplines and apply the quality criteria of PLoS ONE. Enter BMJ Open, SAGE Open (set to fail, though, as the social sciences are not as liberally funded as the life sciences), NPG Scientific Reports (which ironically struggles with its reviewers, since they would like to apply normal NPG quality criteria and repeatedly need to be brought into the new low-quality line), G3, and others. It is blatantly obvious that the economic success of PLoS ONE has whetted the appetites of other, even traditional, publishing houses that want their share of the open access funds cake. The fact that scientific importance and impact fall by the wayside just underscores the business model, which makes use of economies of scale: you have the infrastructure, you have the publishing savvy, you have the connections, and now you can make a lot of money by publishing an even greater lot of less important papers. What is particularly repulsive to witness (at least for me, a whole-hearted scientist) is the way that Peter Binfield (PLoS ONE’s managing editor) is looked (and sucked) up to by a doting audience blinded by his window-dressing arguments, as if he re-invented scientific publishing.

On the other hand, at least BMJ is open about wanting to participate in revenue streams and not wanting to let good manuscripts (what with the regular BMJ’s 93% rejection rate, a lot of good stuff gets rejected) and the money that can be milked from them go elsewhere. Even if they put a spin on the story by wanting “to ensure that any well-conducted study has a home where it can be fully reported.”

Incidentally, not wanting importance or impact is apparently something at the very heart, or beginning, of open access. At least according to Matthew Cockerill, Managing Director of BioMed Central. He proudly proclaimed today that this had been a major criterion in the setting up of the first journals of BMC, and particularly of the BMC series, with the exception of BMC Biology and BMC Medicine. Maybe it was not without reason that Springer had to come to the financial rescue in 2008.

And so: given that open access is being promoted by policy makers all over Europe as the golden future of publishing, we will need to brace ourselves against a tidal wave of low-interest and low-impact research published with taxpayers’ money that would be far better invested in quality subscription journals. This is, I guess, an unforeseen legacy of the “publish or perish” pressure exerted on scientists today.

I find this comment a bit puzzling since nobody named Sarah Bleicher actually participated in COASP. So, this was either posted by someone who didn’t hear any of the presentations given at the conference and is just making this up based on speculation, or someone who did attend the conference but is unwilling to use their real name and stand behind their comments.

Pen names are frequently used in blogging. It is no reflection on veracity. Or maybe she sneaked in, something I sometimes like to do.

Low-interest, low-impact research? PLoS One has an impact factor of 4.41 in the JCR 2010 despite a 70% acceptance rate. The JCR is far from perfect, but it does indicate that, on average, more than 4 researchers in the two years after publication thought enough of PLoS One articles to cite them.

There are plenty of high-quality OA journals with high impact factors, and plenty of crappy ones, just like subscription journals. It would be nice if there were thoughtful discussions, instead of nonsense like this on websites like this one, on what roles (as opposed to role) journals play in scholarly communication, and what configurations for selection, presentation, archiving, etc., as well as funding, make sense for best filling the various roles.

PLoS ONE’s impact factor is low for its cognates — mainstream science journals. If it were more selective, it could easily have an impact factor in the double digits. You could just as easily assert that PLoS has decided that revenues are more important than impact.

I don’t know where you get your data, but it does not appear to be the Journal Citation Reports. PLoS One publishes across a wide range of biological and medical fields. I just pulled at random the data on a dozen or so areas, including BIOLOGY; BIOPHYSICS; BIOTECHNOLOGY & APPLIED MICROBIOLOGY; CARDIAC & CARDIOVASCULAR SYSTEMS; CELL & TISSUE ENGINEERING; CELL BIOLOGY; MEDICAL ETHICS; MEDICAL INFORMATICS; MEDICAL LABORATORY TECHNOLOGY; MEDICINE, GENERAL & INTERNAL; MEDICINE, LEGAL; MEDICINE, RESEARCH & EXPERIMENTAL. Not a single one of these fields has a median impact factor as high as PLoS One’s.

You can assert what you want about PLoS One, but it is not based on data.

There is a place for high impact very selective journals and there is also a place for journals that publish research that is sound but lacks whatever they look for in high impact journals.

In general, does it really make sense to toss out 80% or 90% of the submissions? That’s a pretty constipated, wasteful process that slows down science. It all boils down to the quality of the submissions, but a 30% rejection rate and very fast turnaround seems to make a whole lot more sense. Given how widely read PLoS One is, and the average cites its articles garner, it seems the review process works pretty well in selecting good quality science and weeding out the real junk. Sure, you can (and probably will) point to an article here and there in the journal that is garbage, but you can also do that in almost any other journal, including the high impact journals.

My statement was that PLoS ONE doesn’t compare well to mainstream science journals for impact factor.

Your data compare PLoS ONE to a set of heavily right-skewed medians (I eyeballed the data, and couldn’t see one over 3, so these are heavily skewed distributions). There are a lot of journals that garner few citations and have low impact factors. They’re small, highly specialized, or very clinical/practitioner-focused.

However, please remember that the journals skewing the median down are those with either a very narrow focus, a very small audience, or very low-quality papers (or any combination of the three). PLoS ONE has marginally more impact than a bucket consisting mostly of those. Point taken.

I think David Solomon has a point, but only up to a point. Science is what science does, and if there is a useful role for author-pays, let us find it. I personally think every paper should be published somewhere. But if it requires policy mandates, then that is not innovation, it is coercion, something science does not want or need. Fortunately, I doubt the US will ever adopt mandatory author-pays OA. It violates free speech.

I have never heard of anyone mandating author-paid OA. Some institutions mandate depositing preprints in their institutional archives, and some funding agencies mandate depositing copies in some type of repository, usually allowing some embargo period. It is not about innovation; it is about dissemination, and the right of a funding agency that pays for research, or an institution that pays the salary of a researcher, to insist that the findings of the research they funded are readily available.

Kent, PLoS One has a higher impact than over 93% of the journals in the Science JCR 2010. I think it is a stretch to say that only 7% of the journals in the Science JCR are mainstream. That is a side issue anyway. The point I was making is that PLoS One is heavily accessed and cited despite a fast review on solely technical criteria, suggesting there are a lot of researchers who find the content valuable. The fact that it is publishing thousands of articles suggests authors like it as well. I suspect it is because they can get their research published quickly as long as they have done a credible job, rather than waiting months to find out their manuscript was rejected based on a process that is fairly unreliable and biased against novel research. (http://www.nature.com/nature/journal/v425/n6959/full/425645a.html)

Well, I’ll wager that at some point, PLoS ONE’s impact factor will be shown to be based largely on the fact that the papers in it are sketches cited by some final works. In fact, it’s interesting to note that corporations seem to have caught on to the value of PLoS ONE in putting together their publishing programs. Search on a major company name, and you’ll find plenty of research sponsored by pharma in PLoS ONE. As one study I found noted, “Pfizer had a role in study design, data collection and analysis, decision to publish, and preparation of the manuscript.” That phrase “Pfizer had a role” is amazing to see — I don’t think I’ve ever seen that before. Publication planning is hard to differentiate from legitimate science, but a study not too long ago showed a 2x preference for industry to pay to be published when given the choice by a hybrid OA journal. Those citations may not be as clean as one might hope. Journals with looser criteria and a payment scheme based on author-pays are vulnerable to being played like this. Not to say that traditional journals don’t get duped from time to time, but it’s this cellar door into the literature which often makes publication planning work. So, maybe PLoS ONE consists of sketches of later studies, attracting citations; maybe it consists of business plans carried to fruition later. Maybe it’s a little of both.

David S: It is unfortunate that you chose to reply to Kent in your reply to me. I am assuming for this discussion that “OA journal” means author pays. If it also includes journals that make old articles available after 6 months or a year then it is wildly ambiguous, which may help explain some of the incoherence in the debate. In any case there would be serious free speech issues if the US federal agencies tried to restrict publication to these “delay OA” journals as well. Writing and publishing are not part of the contract. Moreover, as I point out in my recent SK article, federally funded research is available via the final report.

I didn’t mean to confuse anyone. Yours and Kent Anderson’s comments were related to my one comment so I rolled my responses to both of you into one comment hopefully making it clear who I was responding to. If not, I apologize.

I simply have never heard of anyone suggesting a requirement for publishing in author-paid OA journals. The only current US federal requirement I believe is for the Public Health Service (maybe just the NIH), which is an assurance that is part of the federal government’s contract with the institution accepting federal funding. It simply requires that published reports based on the funded research be deposited in PubMed Central within 6 months. Researchers are free to publish in any journal they want. They ARE (actually, their institution is) contractually obligated to meet that requirement, along with every other one of the assurances in the contract. The federal government wants the results of the research it funds to be easily accessible and permanently archived in its public archive. I don’t think that is unreasonable, nor is it violating anyone’s free speech rights. What it does do is force publishers to live with the policy if they want to accept manuscripts based on research funded under this policy.

David S: Perhaps you missed the beginning of this thread. In the very first comment, I objected to the statement in the NYT piece that “…the National Institutes of Health in the United States, already requires the researchers it funds to publish their results in open access journals.” This is more than a suggestion; it is a claim, albeit a false one. But even though false, it is certainly a suggestion.

Moreover, at least one of the UK Research Councils does require publication in open access journals, which is also more than a suggestion. This is in fact a major ongoing policy issue, one that is being actively discussed in US science policy circles today. Some people do want to restrict publication to OA journals. This will probably be a Constitutional issue if it is ever attempted.

Also, for those not familiar with US science funding, NIH funds just over half of the total $60+ billion/year in basic research. It dwarfs all other science agencies.

Another good post Kent. I, and Elsevier for that matter, tend to stay in the background in these articles as we favor communicating with our communities directly instead of through the media. But as the spokesman in question I’m happy to clarify that it was actually quite deliberate to be reported as declining comment. Let’s just say there are times to engage in an article, and times to not, and this just wasn’t one of those times to engage.

When talking to the reporter I did say that even if we were to comment, we’d never be as thorough and articulate as the folks over at the Scholarly Kitchen have been in response to the Guardian article. So while the reporter may have come across this blog anyway, it may have been my recommendation that led him here.

In the reporter’s defense, I think it was fair for him to pull from your blog without calling, and he did make a professional attempt to get Elsevier’s response to the article on the record. But otherwise, I tend to agree with the rest of your analysis in that it was an article we’ve all read 100 times before. Thank you.

I encourage everybody to forward this post to Arthur Brisbane, Public Editor (“ombudsman”) of The New York Times. He’s public@nytimes.com. Tweeting would be helpful too. Mine has the info you’ll need: “Everybody should forward @kanderson’s OA post on @scholarlykitchn to Arthur Brisbane @thepubliceditor of the @nytimes, public@nytimes.com.” If he gets enough feedback we may find a followup in the Sunday Review, the NYT’s excellent Sunday section, in which Brisbane’s job is to keep the NYT honest.

Reading Scholarly Kitchen is like running into that stodgy old emeritus professor in the hall who refuses to accept a new dogma or theory posed in his field. That same professor is likely to use one example counter to the new theory as the reason it cannot possibly ever be true.

Open Access is simply an evolution of the 300+ year old publishing model. And it will continue to evolve. It’s in a massive transition toward a business model that no one can predict just yet. The objective is clear, though: making the literature accessible to all. To get there, you have to first allow as much in as possible, then filter afterward, either through algorithms (e.g., faceted search or collaborative filtering), crowd-sourcing, or word of mouth. And it has certainly been shown that human reviewers do not effectively filter out/in the right stuff as it stands in traditional journals. So the argument that subscription publishing provides quality filtering doesn’t hold. And then there is the case for negative results, which are valuable but by and large are only allowed in with an author-pays OA publishing model.

Oh, good — more rhetoric. First, personifying us as “stodgy” and “old,” with the implication that we’re batty and contrary, then praising the model you believe in without providing evidence for any assertion.

For instance, “a business model that no one can predict just yet”? Really? I can predict it won’t be based on individual subscriptions. I can predict it won’t be able to ever have a journal with a 5% acceptance rate and a sustainable price to authors. I can predict it won’t ever be the dominant business model for high-end journals. I can predict that advertising won’t play a significant revenue role in the near-term, and possibly in the long-term. I can predict print won’t have a significant role. I can predict that value-added filtering services will emerge, and will deliver more value than OA publishing itself.

To make the literature accessible to all, you have to allow as much in as possible? That’s nonsense. Most subscription publishers make their content available to everyone after six months. Many are publishing more selectively than ever.

You’re saying that publication isn’t a filtering event? Or not much of one? And that filtering should occur afterwards? That merely shows that you’re serving authors, not readers. That’s where there’s a fundamental disagreement — I believe publishers should serve readers first, authors second. OA serves authors first, and you show an absolute disdain for readers, actually saying they should work to figure out which things you published are any good (crowd-sourcing, collaborative filtering).

As for human reviewers effectively filtering out the right stuff, their role is to find the right place for a study. See Phil Davis’ post from earlier this week. Your model still depends on the same model of human reviewers, but without the helpful branded hierarchy that’s evolved over decades, so there’s no improvement in process and a potential deprecation of utility.

Finally, as for negative results, this old canard fails to acknowledge that most high-quality journals DO publish negative results. I’ve been through many months and years during which it seemed that all we published were negative results. It’s just that because good journals seek out high-quality, novel studies (not “dropped the test tube, let’s write it up” negative results), these studies are big deals, so they don’t quite register as negative results in the low-level, who-cares kind of way. They are important negative results.

If having thought these matters through, having based my opinions on facts and experience, having created a logical framework that favors consumers of the literature over suppliers of it, and having factored in the rewards system that exists and is likely to continue for decades makes me old and stodgy; and if having illogical, self-contradictory, misinformed, and out-of-touch opinions that ultimately resolve into trite rhetoric makes you young and free-thinking, so be it.

Hello Jason,

This stodgy old Emeritus Professor once worked for an OA publisher…

Making “the literature accessible to all” has to encompass absolutely all of the literature. All of it. Not just physics or the sexy bits of biology. It has to encompass all the other ’ologies and the other outputs of scholarly learning.

One view of OA posits the hypothesis that author charges will enable all of the literature to be made available to all in a more efficient manner than the current system. Implicit, but not usually stated, is that these author charges will support the ongoing maintenance of the corpora of literature. Presumably, those same funds will also allow for ongoing development work in response to the evolving needs of the consumers of the information.

My understanding of those other areas is incomplete, but I have heard consistent comments from those in the know about the ability to fund research dissemination on an author-pays model in areas of study that do not receive anything like the levels of funding of (for want of a better generalisation) the ‘Big Sciences’. The humanities and the social sciences appear to be particularly disadvantaged here, and there are many areas of biology where grant money is in short supply.

My point is that revolutions change the world, but one does need to be aware of what the collateral damage might be. Does OA, for example, lead to an enrichment of scholarly information across ALL fields of study when compared to the current levels of information dissemination? If it does not, is the loss outweighed by the benefits it brings?

To me, bluntly, OA is simply a reallocation of the flows of money. I’m not convinced that this reallocation leads to a net benefit for scholarly research, but I am most certainly open to persuasion. Furthermore, I keep my eyes and ears open for developments that would help to inform me.

You state that it has been shown that human reviewers do not effectively filter stuff. Implicit in that statement is the idea that machine algorithms can do it better. I’ll call you on that. I’d like you to support your argument there. I want to see a study that compares human vs. machine filtration across a range of scholarly fields and shows a significant improvement of machine-based filtration over human.

A good comment from Cory Doctorow on using algorithms to filter:
“There’s this kind of weird, big lie about how an algorithm is not a form of editorial control. Google will say ‘we have organic search results’ in contrast with what Alta Vista used to do, where they would take payment to put a result first. It’s ‘organic’ because it’s done with math, but actually it’s editorial by another name. All the companies that do editorial by algorithm claim that there’s something about math that makes it free of bias and will.”

@David Smith – I like your stodgy, old emeritus professor style. 😉

As for references to human reviewer quality, I am unaware of a human vs. machine study, which is not to say they haven’t been reported. All I can remember, and advise any comment readers to look up, are human-only studies looking at retractions, etc. If readers know of any human vs. machine studies and have the time to do so, perhaps they can post them here.

To be clear, faceted search, collaborative filtering, etc. have a long way to go, and I make no claims that they are perfect. I do claim that they (or tools similar to them) are the way forward to properly filtering the deluge of literature that is only growing in size each year.
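To make the idea concrete, here is a minimal, hypothetical sketch of item-based collaborative filtering applied to papers. All reader and paper names are invented for illustration; a real system would use bookmark or citation data at scale, but the mechanism is the same: recommend unseen papers whose readership overlaps the readership of papers you already value.

```python
import math
from collections import defaultdict

# Toy data: which (invented) readers bookmarked which (invented) papers.
ratings = {
    "alice": {"paper_a", "paper_b", "paper_c"},
    "bob":   {"paper_a", "paper_b"},
    "carol": {"paper_b", "paper_c", "paper_d"},
}

def cosine(a, b):
    """Cosine similarity between two sets of readers."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

# Invert the data: paper -> set of readers who bookmarked it.
readers_of = defaultdict(set)
for reader, papers in ratings.items():
    for p in papers:
        readers_of[p].add(reader)

def recommend(reader, top_n=2):
    """Score unseen papers by their similarity to the papers this reader already has."""
    seen = ratings[reader]
    scores = {}
    for candidate, cand_readers in readers_of.items():
        if candidate in seen:
            continue
        scores[candidate] = sum(
            cosine(cand_readers, readers_of[p]) for p in seen
        )
    # Highest-scoring unseen papers first.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

With this toy data, `recommend("bob")` surfaces paper_c ahead of paper_d, because paper_c shares more readers with the papers bob already bookmarked. The editorial bias David Crotty mentions is visible even here: the choice of similarity measure and input data is itself an editorial decision.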

@David Crotty – For sure there is an editorial bias in even the most “pure” algorithms. I’ve stated this in public, in fact at last year’s Science Online London 2010, in a talk about academic recommendation engines. There’s no way around that editorial bias, so everything must be taken with a grain of salt. Researchers still have a duty, IMO, to do some actual “research.”

OA is definitely a reallocation of resources. Since Kent alludes to me being an OA fanatic, for the record, I am most definitely not. I am, however, an advancement of science advocate. The current models being explored by OA publishers are not what they will be in 10 or 20 years. So, when that day comes I will embrace it; and if current subscription journals can find a better way to disseminate, then I am all for that as well. To me, it’s not an “Us versus you” mentality.


Thanks Jason. Doctorow makes an important point about Google, Facebook and the like, about how the results they give us are based on what makes the most money for those companies rather than the most accurate answer to the questions we’re asking. Though editorial bias certainly exists, one strong point about our current journal system is that the bias is distributed among a vast number of editorial offices and publishers. A switch to one algorithmic method or one mega-journal would create a monopoly situation where authors had no choice but to play to that single bias, rather than having a mixed market with multiple chances to find a receptive outlet.

Wow, you compare PLoS to ‘factories spewing pollution into rivers’ and then complain in the comments about other people’s use of rhetoric!

And how about an apology to Mr Guttenplan? Your main piece of evidence that his ‘true intent wasn’t objective journalism’ – i.e., your incredulity that Elsevier wouldn’t give him a quote – has been blown out of the water.

Oh, and ‘I can predict it won’t be able to ever have a journal with a 5% acceptance rate and a sustainable price to authors.’ BMJ has a 7% acceptance rate and is free to authors. It’s not a business model that will work for all journals, but then few journals have acceptance rates around the 5% level.

In writing my alternate versions, I tried to adopt the level of rhetoric one might find in a newspaper exposé. In essence, I was writing “in character.” You can debate how well I did that — was I chewing the scenery? — but that’s the intended effect. I personally don’t feel too far from that point, however, especially since the person quoted in the Guttenplan article essentially said as much — we publish unimportant, redundant works, and let others sort it out later, all while taking a fee for it. So while the rhetoric was pitched at the “news exposé” level, the idea is one I don’t flinch from, especially given the implications of what Dr. Patterson is quoted as having said.

I have no intent of apologizing to Mr. Guttenplan. If Elsevier wasn’t willing to provide a quote, there were other publishers he could have spoken with, and he certainly could have done enough homework to know that certain people he quoted from 2006 are no longer with the organizations they were with 5 years ago, for instance. When a journalist is actively seeking a range of opinions from one side and not as careful or energetic about getting current quotes from the other, I sense a lack of objectivity and professionalism. Look at the comments here. We have so many people here he could have called who are willing to talk and who have a ton of smart things to say, but he failed. Why? Was his agenda satisfied with what he had? I think he was trying to generate a narrative.

Now, to your mind-boggling last point. The “price to authors” in the subscription model has for the most part been zero. I’ve never worked for a journal that charged authors. Some levy color and image charges in order to prevent authors from driving up costs through vanity publishing practices (it had better be important, the price suggests). I was clearly referring to open access (aka author-pays) journals. To compare that to the BMJ, which is a subscription journal, is a whiplash-inducing frameshift that makes no logical sense.

Er, all research articles in BMJ are open access. They make no charge to authors and reject 93% of papers. You predict that there can never be an open access journal that rejects 95% of papers. I’m saying there already is one.

Of course, they fund the OA research papers with a subscription news and views section. Not all journals can do that. But some of the very rare 95% rejection rate journals could. I’ve often wondered what would happen to Nature and Science if they went down that route. (Out of interest, I wonder how many 95% rejection journals there are that don’t have significant editorial features.)

Well, yes, BMJ has been toying with free online for years, and fine-tuning the role of research content in their pages as well. In a decade, the number of research articles published in the BMJ decreased from more than 900 to fewer than 300. They have ratcheted down their research publishing for a variety of reasons, but primarily because publishing original research became much less directly important to their business model, and making it free online shifted it into being nicely indirectly important. And this illustrates the difference between free access and open access, in my mind. Free access is a publisher deciding to allow content sampling to drive subscription business: BMJ’s research content acts as the bait to drive online subscriptions to their other content, products, and services. They may call it “open access,” but it’s really more of a content sampling strategy. As you note, it’s cross-subsidized by other revenues, so is probably viewed internally by some as a marketing expense (opportunity cost). BMJ has historically funded its journal through portfolio (non-journal-editorial) revenues, so this wasn’t much of a stretch for them.

Journals are wildly different from one another, so a fundamental flaw of the OA argument is the claim that OA will dominate or take over. There is no reason to believe this. No model dominates now except site licensing, and even within site licenses there is a forest of variety. Even BMJ has executed open access in a way few advocates could have predicted, and for reasons all their own.

Readers who are interested in assessing the value of OA journal articles would also benefit from reading several excellent articles by Jeffrey Beall in the Charleston Advisor, which raise various critical points about a small number of OA publishers:

“Bentham Open,” The Charleston Advisor, July 2009, pp. 29-32.
“Predatory Open-Access Scholarly Publishers,” The Charleston Advisor, April 2010, pp. 10-17.
“Update: Predatory Open-Access Scholarly Publisher,” The Charleston Advisor, July 2010, p. 50.

There are a few logical errors in Kent Anderson’s post that seem quite persistent.
1) The notion that OA equates to lower standards. There are, in OA publishing as well as in ‘traditional’ publishing, widely diverging quality standards (if we indeed know what ‘quality’ means in this context). It’s got nothing to do with OA per se. There are plenty of low quality traditional journals, although I think that even those have a function as the hard core rubble in the foundations of the science literature edifice.
2) The notion that having unlimited space, as PLoS ONE has, is a consequence of OA. It is a consequence of e-only publishing, not of OA per se.
3) The notion that most journals (their editors) know in advance what is important, or even care whether it’s new, and, if they do, that that is important. Very often, important results only prove to be so after a (long) while. And focussing on newness and positive results is a problem, as it all but eliminates important information, such as confirmatory and negative results, from the scientific literature; such information is thus irresponsibly ignored. At least OA publishing offers the possibility that those results will be published, too.

The biggest problem I see with science publishing is that it, OA and traditional publishing, loads the entire cost on the published articles, whereas much of the work is to do with the acceptance/rejection process (even at PLoSOne). A much fairer system would be to charge at least a portion of the total costs on submissions, in the way that one pays for, say, a driving test, whether or not one passes. The current publishing systems, OA and subscription-based, militate against such a system, unfortunately.

I assert that financially viable OA equates to lower standards because the only model that seems to work financially for OA is bulk publishing, as a high volume of author fees seems necessary to defray the costs of the work. PLoS ONE shifted from “scientifically sound” to “methodologically sound” precisely to support this practice. Other OA publishers have similar high-volume practices. In fact, many appeals from OA journals come in the form of solicitations promising no delay, assured publication, and fast publication. They are clearly only after author fees, not filtering for readers. That’s a completely different — and lower, I believe — standard.

Having unlimited space is enabled by technology, but while there is now unlimited space available to every journal, many high-quality journals have actually lowered their acceptance rates. So obviously it’s not technology alone that is driving bulk publishing practices, but something else. See prior paragraph.

Aspiring to publish the most important, scientifically sound, and relevant work is an imperfect practice, but a better practice, I believe. Sure, once in a while, a paper may slip through, but these are clearly the exception. Novel findings are more interesting than redundant findings, and the assertion that negative findings aren’t published in high-quality journals is rubbish (they’re published all the time). OA journals are likely publishing the redundant negative findings. Yawn.

OA loads the entire cost of publishing onto the published paper, because those journals only get paid when a paper is accepted. THAT’S THE PROBLEM WITH OA! There is an inherent conflict of interest for editors, reviewers, and publishers — I need to publish more papers to be viable. Subscription journals can price themselves so that the cost of rejection is covered, spreading those costs across readers and users so that the editors and publishers have built-in incentives to keep READERS, which is why that form of publishing is more valid in my mind. It’s focused on the user/reader, not on the supplier/author.

Cascading peer-review is one way that subscription journals are coping with decreasing subscription prices and institutional pricing pressures (some of which have been created by OA funds siphoning money away to sit unused in shadow accounts) — that is, spread the cost of rejection through a family of journals. Because this is effectively raising the acceptance rate for that review process — not for any particular journal, but for a particular journal’s review process — even subscription journals now are becoming bulk publishers. They’ve caught the same cold. And that’s probably an unintended consequence librarians, OA advocates, and others didn’t see coming.

Can’t agree with your first point here, Kent. The assumption that OA automatically means PLoS ONE (or the equivalent) is as misguided as assuming all publishers resemble Elsevier. Plenty of publishers are doing different things with OA, many of them financially sustainable and rewarding, including selective OA journals and highly selective hybrid journals.

Yes, but those examples are cross-subsidized. To me, “financially viable” OA is OA that is self-sustaining. I haven’t seen a non-bulk model for this. Selective OA and hybrid are basically cross-subsidizing from subscriptions.

By “selective OA” I mean journals that are fully OA, but reject more papers than they accept. There are some that are financially viable, unlike the non-PLoS ONE journals from PLoS.

Not sure why the business model has to be purely author fees to qualify as viable. OA can be a valuable part of a multidimensional strategy. I don’t know of any subscription-access journal that relies solely on subscription revenue, why must OA be held to a different standard?

As for examples, the BioMed Central journals turn a double-digit profit margin (their quality varies journal to journal; their “Genome Biology” has an Impact Factor just under 7). OUP’s “Nucleic Acids Research” is fully OA and in the black, with a rejection rate around 70% and an Impact Factor close to 8. The journal does sell print subscriptions, but would still be viable without that additional income.

Cross-subsidization comes in a lot of forms. For instance, at OUP, I’m betting that the costs for HR, IT, legal, and administration aren’t put against NAR. Even at that, an acceptance rate of 30% is kind of high. But if they’re profitable with all their overheads allocated properly, good for them. It should also be noted that NAR charges institutions membership fees in exchange for author discounts, etc.

Most subscription journals and paid publications subsidize their owners and their other lines of business, not the other way around. That’s worth noting.

No model is “pure,” so to speak — PLoS has institutions paying, grants, etc., while many subscription journals have page charges and figure charges, others have advertising, and so forth. BMC has a huge list of journals, and charges subscription fees for non-research content in many cases. But if none is pure, then some are better to watch because they’re closest to pure. PLoS is closest to a pure OA/author-pays model because their offsets are so small. If they were ever bought by a major international publisher, their level of cross-subsidization at the infrastructure level would become significant. OUP is too big and diverse, BMC has Springer now to defray a lot of overheads, etc.

I think ultimately you and I are in violent agreement that business model shouldn’t enter into the picture, but when one camp is asserting that there is only one legitimate business model, that it’s superior morally and factually, and that the destiny of all publications is to adopt that model, I feel the need to point at the place where that model is in the most extreme use and call out the inherent problems with it when it’s not part of a more diverse, informed, and balanced approach. OA has a spectrum of implementations, but the potential problems of going “big” with it are clearest at the edges.

The real takeaway is that there is very little OA that is “pure OA.” Most of it is mixed up with some budgetary and financial ecosystem that makes it viable. PLoS is the least mixed, and therefore the easiest to tie to OA claims of viability, superiority, and validity. But in the world beyond PLoS, which is large, OA isn’t really able to thrive without these financial friends and alliances. To me, that’s a really important point advocates tend to gloss over.

I do understand that you’re pushing the logic of an argument to an extreme to make a clearer point. But if I’m going to call out Monbiot for implying that all publishers are Elsevier, then I also have to call out an argument that all OA is PLoS ONE.

The question of overhead is an important one, and in many ways one that favors the big commercial publishers over the not-for-profits as far as implementing OA, perhaps an unintended consequence here. Working on a blog post to clarify some thoughts on this, look for it later this week.

Re OA: “There is an inherent conflict of interest for editors, reviewers, and publishers — I need to publish more papers to be viable.” But there is an equally inherent conflict of interest in non-OA publishing: I need to publish FEWER papers to be viable. You may say that it’s quality that drives the effort to limit the numbers, but that was certainly not the reason in the past (pre-online), nor is it now. Because editing, publishing, printing, hosting, etc. all cost money more or less in direct relation to quantity, it is more profitable to publish fewer papers. If not for the requirement to actually produce results and to maintain a reputation, this would naturally tend to zero. But the same is true for OA: if not for the requirement to actually produce a non-infinite resource and to maintain a reputation, they would tend toward accepting everything. But they don’t. And you don’t publish nothing. Both systems have a bias in terms of what is selected. The OA advocates argue that it’s better to publish something than not to, and you argue the reverse. There’s nothing good or bad in a 95% acceptance rate, just as there’s nothing good or bad in a 5% acceptance rate. The proof is in the resulting product.

A few points in response to Kent Anderson’s comments of today:

Kent says that “BMC has Springer now to defray a lot of overheads”. True, of course. But this is also true: *any* journal that Springer publishes has Springer to defray a lot of overheads. That’s the nature of an economy of scale. Nothing particularly peculiar to OA.

Kent talks about “cross-subsidization” as if it is something dirty. It is the essence of the portfolio approach that all publishers worth their salt take. Some OA journals will do well, some not so well; some subscription journals will do well, some not so well. Cross-subsidy is the name of the game. Nothing particularly peculiar to OA.

Kent also says that “one camp is asserting that there is only one legitimate business model”. He clearly refers to the OA ‘camp’. Yet if there is any camp that asserts there is only one legitimate business model, it is his camp, and that model is subscriptions. OA is not a business model. It needs a business model to sustain itself, of course, and given that in the scientific ego-system ‘publish-or-perish’ is the prevailing adage, and not ‘read-or-rot’, it makes economic sense to place the financial burden of sustaining publishing on the shoulders of those who benefit the most from the system, i.e. the authors. In reality, it is of course the same funders who pay the ‘author-side’ fees as pay the ‘reader-side’ fees, in the end. The former money flow is more direct; the latter goes via overhead charges that institutions levy on grants, from which the library subscriptions are subsequently paid. That publishers ultimately get paid out of funding streams is nothing particularly peculiar to OA.

Kent says that “OA loads the entire cost of publishing onto the published paper, because those journals only get paid when a paper is accepted.” That’s true. And in my view a flaw of the system. But it applies to subscription journals as well. Subscription journals get paid only on the basis of what they publish. All the major publishers (and I suspect most of the minor ones) have for decades marketed their journals to prospective authors with a view to get more papers in order to increase the published volume and with that the subscription charges. Nothing particularly peculiar to OA.

The take away is that Kent Anderson has some kind of OA-allergy, seemingly on principle, and no amount of reasoning or even experimentation with business models that aim to provide, via open access, wider dissemination of research results and a more efficient scientific information flow, will cure that allergy. If one doesn’t accept that open and free availability of scientific information, a free ‘noösphere’ if you wish, is something to strive for, but instead one takes the view that the subscription model is sacrosanct (the ‘licence-sphere’, or ‘L-sphere’), then a discussion on how to achieve open access is futile.

I think there’s an inherent tilt toward authors and publication at a time when information overload is a serious problem. OA solves its financial problems basically by publishing more. Subscription journals have typically solved their financial problems by becoming more interesting to more readers. That basic difference — plus the heated arguments and righteousness of the OA advocates — makes me skeptical and watchful.

“All the major publishers (and I suspect most of the minor ones) have for decades marketed their journals to prospective authors with a view to get more papers in order to increase the published volume and with that the subscription charges.”

This has not been my experience. Where I’ve worked, journal editorial offices are urged to continuously raise the bar, to accept fewer papers rather than more. The aim is more focused on increasing quality rather than quantity. Page budgets are strictly followed and the idea of massively increasing size (or frequency) of a journal is not taken lightly, nor regularly encouraged.

Kent, I really have no idea what you mean by sketches. Most research articles are pretty short and to the point. If PLoS ONE-type mega-journals are a venue for getting new, innovative research disseminated quickly and efficiently with a narrowly focused technical review, that is an important and useful function. That is my whole point: there are a variety of scholarly communication needs, and different types of journals can serve different functions. Too little of the discussion is on clearly articulating the communication needs and how best to configure journals, or journal replacements, to fulfill them.

As for pharma research, that has always been published in journals and should be; it’s important research. There is a need for controls to minimize the inherent conflicts of interest, and that is one of the areas PLoS ONE review focuses on. Not to sound catty, but that is not an area I would necessarily be highlighting if you are promoting traditional publishing models. (http://classic.the-scientist.com/blog/display/55679/)

Yes, I’m well-aware of the Elsevier scandal, which we blogged about when it occurred. As for PLoS ONE focusing on inherent conflicts of interest, allowing a pharma company to have a hand in study design, data outputs, and the like isn’t handling COI, it’s allowing it. Elsevier should be ashamed for participating in publishing fake journals. PLoS ONE should be ashamed for allowing pharma to publish articles without adequate COI controls in place. Both are motivated by the same underlying motivators — cash for conflicted information. Glad you pointed out that linkage, but you praise PLoS ONE despite its lax practices. I don’t think that’s right.

As for “sketches,” I meant that I think most of the studies are the simpler, more rudimentary studies that scientists do all the time, the ones less worth the trouble of traditional publication, and now they have a way to get them published quickly and easily. Fine. But when they later get a larger, better study published elsewhere, they probably will cite the sketch, so there you have it. Interviews I’ve read with scientists proclaiming the virtues of PLoS ONE basically tell this tale: it’s fast, affordable, and lets me get on with my bigger research projects without the delays and trouble of traditional publication for minor results sets. It’s OK as far as I’m concerned, but we should be clear what is going on. Impact factors don’t get to the underlying dynamics of citation. I think PLoS ONE’s impact factor is higher than expected because papers there are sketches of later, more finished works. Or, in the case of companies, they’re potentially the first step in a publication program.

I stand by my statement that there is a huge difference between what Elsevier did with the fake journals and PLoS publishing pharma-designed research. What Elsevier did was underhanded and deceitful. You can question the appropriateness of PLoS’s conflict of interest policy, but they did not try to deceive anyone.

Where is the data backing up your assertions about the rate of author self-citation, or the claim that PLoS ONE mainly publishes preliminary reports of research studies? Of course impact factor is a crude measure of citing patterns, but again you provide no evidence that PLoS ONE’s citing pattern is different from that of other journals.

I took a quick look at the author self-citation literature, and all of it I could find is based on 5 or 10 years’ worth of data. Even then, the high end of the self-citation rate appears to be about 20%. If you limited it to two years’ worth of data, self-citation would be lower. And if it were the case that an article was citing an earlier preliminary study, the rate would probably be much lower still, given the time it takes to do additional research, write it up, and get it published in a “high impact” journal after the typical couple of rejections.

There is a huge difference between what Elsevier did and publishing pharma research with the company participating in the study. Conflicts of interest of all sorts exist in research, and you can make a pretty valid claim that all researchers have an inherent conflict of interest. Successful studies bring fame and fortune; negative ones do not.

Generally, conflicts of interest are dealt with by being disclosed. At some point that is not adequate, but it is debatable at what point conflicts need to be addressed by other means. I feel pharma participating in published research is fine as long as it is disclosed. In properly written research reports, the methodology and results are adequately described, and readers can make their own conclusions about the usefulness of the results, including dismissing them out of hand.

Traditional subscription publishing has inherent conflicts of interest too. Publishers have made high profits off reprints, and if a paper on a trial shows a new drug A is better than the standard treatment B, you can bet the company that owns drug A is going to be buying truckloads of reprints to stick in the hands of every physician they can find. It doesn’t necessarily mean the journal is more likely to publish such a trial because of the profit, but it’s a conflict of interest.

I’m with you, David. Conflicts of interest cannot be avoided and are found all over in the scientific ‘ego-system’, so disclosure of those potential conflicts of interest is the key. PLoS One (and other PLoS titles, as well as BMC titles) score very well in that department. Certainly not less well than highly reputable subscription journals.

Allowing a pharma company to have input into study design, data presentation, etc., is quite problematic. The ICMJE states, “Biases potentially introduced when sponsors are directly involved in research are analogous to methodological biases.”

Let’s just revisit that disclosure again: “Pfizer had a role in study design, data collection and analysis, decision to publish, and preparation of the manuscript.” Essentially, Pfizer played a role in designing the study (a role not elaborated upon in the Methods, as far as I can see from a quick scan), data collection and analysis (major opportunity for bias), decision to publish (wow!), and preparation of the manuscript (what was written). And you think disclosure is sufficient to make this a reliable contribution to the literature? And you fail to see the tie to publishing selected articles for a corporate sponsor? I’m actually not sure which is potentially worse, but I think turning a blind eye to either and saying “PLoS ONE got disclosure, so good for them” is weak tea.

Kent, you seem to say that pharmaceutical companies (and possibly, by extension, any commercial company) should not engage in serious scientific research and if they do, either by themselves or in collaboration with academic research departments, that they shouldn’t be allowed to publish it in academic journals. If that’s not what you mean, and pharmaceutical companies are allowed to engage in serious scientific research, either by themselves or in collaboration with academic research departments, and publish the results, how could they possibly avoid having “a role in study design, data collection and analysis, decision to publish [which, obviously, means the decision to make public, to submit for publication; not the decision to accept for publication in a given journal*], and preparation of the manuscript”?

If they disclose having a role, the reader can – if he or she so chooses – take that into account when judging potential biases. No journal can ever guarantee that there are no hidden misrepresentations or biases in any paper it publishes, however good its peer-review processes are. A quick look at http://retractionwatch.wordpress.com will underscore this point. To claim that journals do protect the reader against any misrepresentation or bias in the papers they publish is highly disingenuous, and condescending to boot.

*When an author ‘decides to publish’ it never means, and never has meant, that he or she decides to accept for publication. That is the prerogative of a journal’s editors, not of the author. This is as true in reputable OA author-side payment publishing as in other reputable publishing business models. Journals, OA or subscription, that have different policies on this quickly become known as disreputable and are committing academic credibiliticide. Correcting mechanisms are ultimately very strong in serious science.

In the study, some of the authors work for Pfizer. So perhaps it is no great surprise that Pfizer had a hand in designing the study, collecting the data, and deciding whether to publish.

But is this an open access issue, or a PLoS issue? Let’s do a quick search of PubMed for ‘Pfizer’. How about ‘Smoking, smoking cessation and smoking relapse patterns: a web-based survey of current and former smokers in the US’, published by Wiley in the subscription-based International Journal of Clinical Practice? (http://onlinelibrary.wiley.com/doi/10.1111/j.1742-1241.2011.02758.x/full – you may need a subscription to see the full text.)

All the authors work for Pfizer or a company that was paid by Pfizer for the research. The acknowledgement says:

“This study was funded by Pfizer Inc. The study sponsor was involved in the study design, data analysis, data interpretation and the writing of the article.”

Whether or not this type of behaviour is acceptable is one question, and I’m sure it could make for an interesting debate. But please, let’s not try to pretend that it is an issue particular to OA.

I’m not pretending that business model is the defining factor, but OA advocates praise OA as being ethically and functionally superior, and I disagree. Actions speak louder than words. Not all journals have feet of clay in this area; finding another journal with feet of clay isn’t a spectacular find, and you’re confirming that PLoS ONE has feet of clay. The other interpretation of your perspective is that OA itself creates no improvement over the reporting of biased results; it just gets them out there faster. My extension of that is that because it relies on an author-pays model, it’s also less likely to be able to raise bulwarks against bias. Time will tell. So we truly just have a different business model, one that doesn’t offer any improvement in the quality of reports. Gee, glad we worked so hard to get it. And now it’s also as profitable as traditional publishers. AND now everyone can access the biased results. So we’ve reinvented the wheel, but still haven’t invented the brakes.

OA was originally championed by librarians because of the pressures exerted by the “serials crisis” on their budgets, the idea presumably being that OA would relieve that pressure – as, indeed, it would, except for those versions that rely on libraries coming up with membership money that allows a discounted rate on author fees for their faculty. But the irony here is that, as usual, publishers outside the university – whether commercial or non-profit like PLoS – are finding a way to make this a very profitable business, again. The result is not, therefore, cost savings to the university overall, but only to the library.

Peter Suber agrees about this “cost-shifting,” but thinks the cost will be lower overall; I’m not so sure, given the nice fat profit margins that PLoS One is now racking up. I could be more sanguine about this state of affairs if I thought that the cost-shifting would free up library funds to buy more scholarly books. But no librarian I have talked with thinks that will happen; indeed, the rage now is all for patron-driven acquisitions, which may well decrease sales to libraries further.

So, whose interest is OA ultimately serving? Some more readers not affiliated with universities will gain access to more literature, and that is a good thing. But it strikes me that the shift to OA will do little or nothing to help alleviate the economic pressures on universities, and that most of the money for publishing will continue to be sucked out of universities for the benefit of other businesses.

Sandy says “…it strikes me that the shift to OA will do little or nothing to help alleviate the economic pressures on universities, and that most of the money for publishing will continue to be sucked out of universities for the benefit of other businesses.”

It is a widespread and common misconception, in Academia as well as in publishing circles, that what is being paid for is ‘publishing’. I don’t think it is. Publishing (as in ‘making public’) is actually exceedingly cheap. It can be done on the web by anyone at insignificant cost.

What is being paid for is what might be called ‘status enhancement’. The status of individual researchers, of research departments, of universities, even of whole countries. Publishing is used – hijacked? – for that purpose. In order to work as a status enhancement mechanism, publishing has to be formal, with ‘quality’ proxies such as peer-review and citation metrics and Impact Factors, with ‘labels’ (journal titles) that indicate these ‘quality’ markers, with redundancy limitation rules (every article must be unique, ‘self-plagiarism’ is not even allowed), etc. The providers of these services – the ‘hijackers’? – call themselves publishers (whether OA or non-OA), of course, but they are in the employ of those in the ego-system who desire status enhancement. And they charge what they can get away with. That’s a market mechanism. They are just catering to what is expected of them. Academia drops the money on the proverbial street and publishers just pick it up. They are not in the business of alleviating the economic pressures on universities and most never pretended that they ever were (although OA publishing, in principle, introduces competition on price into the system which may indeed help with alleviating some economic pressures; although it is a solution that comes with its own problems, as many a solution does).

Those who have ethical or moral questions – or even just economical questions – about the cost of formal publishing and the profits made, should consider asking those questions as well in relation to the necessity of the desire for status enhancement in Academia. Maybe the importance of such status enhancement is worth the cost.

Although this is a little late to contribute to the debate: I have written (originally for a German publishing house) a booklet on the problems of Open Access financing. An English version is available here: http://www.textkritik.de/digitalia/sudelblatt_englisch.pdf
I hope it helps to clarify the issues at stake, especially why OA is NOT an instrument to save money, whether for scientists, universities, or governments.
