Image: Burglar Bill climbing the wall. I have seen a few more of his mates around Blewbury. (Photo credit: Wikipedia)

Last week, BusinessWeek published a themed issue entitled, “The Year Ahead: 2014.” It’s a fascinating compilation of interviews, data, projections, and ideas. I will be reviewing my copy of this superbly useful print publication for weeks to come.

One article in particular caught my attention: “The Year of the Paywall.” In a single page, it neatly summarizes the problems facing newspaper, entertainment, and magazine publishers, while touching on problems facing scientific and scholarly publishers by extension.

The major premise? Publishers were demonized for having paywalls for individual subscribers, but now are finding that every alternative is either too unreliable or simply insufficient, and are returning to individual paywalls with a vengeance.

If you’ve toured many content sites over the past few years, you’ve seen this trend developing. Television and media companies have taken the lead. These companies know how to commercialize content already — a chart in the same issue of BusinessWeek breaks down your average cable bill by content provider. (You’d be surprised how much each of us is paying for ESPN, for example.) They are bringing this same savvy to the Internet. The BBC and various Viacom entities have it down to a science, imposing nation-level controls that make their sites feel like DVDs with regional encoding.

New paywalls and subscription approaches have started retraining users into a paying mode. It’s not an alien concept for any segment of the population — subscriptions dominate many other parts of life (cable, satellite, cell phones). So it’s not surprising that success stories are emerging:

The New York Times, for one, started charging online viewers of its content in March 2011 and now makes more money from readers than advertisers. It gets 53¢ from readers for every 47¢ it gets from marketers. That ratio used to be 80-20 in favor of advertising.
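Those figures can be sanity-checked with a quick back-of-the-envelope sketch (hypothetical, using only the numbers quoted above):

```python
# New York Times revenue split, per the figures quoted above.
reader_cents, marketer_cents = 53, 47
reader_share = reader_cents / (reader_cents + marketer_cents)
print(f"Readers now supply {reader_share:.0%} of revenue")  # 53%

# Under the old 80-20 split in favor of advertising,
# readers supplied only about 20% of revenue.
old_reader_share = 0.20
gain = (reader_share - old_reader_share) * 100
print(f"Reader share is up roughly {gain:.0f} percentage points")
```

In other words, readers have gone from supplying about a fifth of revenue to supplying a majority of it.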

Why might the individual paywall become more of a factor in the future of scientific and scholarly publishing, even in this age dominated by institutional subscriptions and conversations and mandates about open access (OA) and open data? The answer is simple — there may be no viable alternative.

Institutional paywalls (i.e., site licenses) set our industry apart, but library budgets are not keeping pace with the production of scientific and scholarly information. A major trend driving the move toward more individual paywalls in our industry is the long-term decline of library budgets relative to university budgets. While library budgets are still fairly robust, the mismatch between the growing supply of trained scientists, the soft money flowing in from research funding, and the failure to support paid access to research reports through library funding remains a fundamental market failure. That is, the market for research outputs is growing (both supply and demand), but universities and corporations, which otherwise benefit from that supply and demand, are unwilling to support it adequately. This problem seems intractable, so attention is turning back to the individual purchaser.

There are very few other places to look.

Advertising won’t save the day. Most academic publishers are small and don’t have a lot of advertising revenue to begin with. Even if they did, promotional spending is down across the board, for a variety of reasons. There is the macroeconomic problem of the Great Recession, which continues to drag on as US political gamesmanship and European austerity contribute to ongoing paralysis. There are particular market trends, such as the decline in pharmaceutical promotion driven by a lack of new therapeutic entities to promote. There is the increasing trend of companies catering to the stock market, which has many firms hoarding cash instead of investing in new initiatives. And there is the shift from print to digital, with punishing economics facing anyone rushing the transition. Digital advertising does not scale as well as print, and the price points are too low. In short, advertising and promotional spending in academic publishing provides some relief, but it is under pressure and facing problematic trends.

Reprints won’t come to the rescue. In addition to structural changes in the purchasing and distribution of reprints caused by the Internet, now there is the Sunshine Act and similar state-level and national initiatives intended to blunt the effects of drug and device company influence on physician prescribing habits. The unintended consequences are significant for many businesses, especially medical publishers. Pharmaceutical companies are now reluctant to distribute reprints, a change that has cut reprint business off at the knees in the US, taking away a major revenue offset that allowed other prices to remain low.

Suffice it to say that nearly every revenue stream publishers have depended upon to spare individuals from paying is soft or under pressure. Even Gold OA is not a solution, as the margins on these business models are low (with some exceptions, such as the mega-journal at PLOS, where high volumes are achieved), and many approaches eschew long-term content sales strategies (i.e., CC-BY doesn’t help you sell the content again later or into different markets). For smaller publishers, dabbling with Gold OA was possible because some of the other revenue streams were sufficient to offset the cost of doing so. As these dry up, they can no longer subsidize Gold OA experiments.

Finally, publishing businesses have never been more complicated to run, and complexity generates expenses. Social media doesn’t run itself. Publishers balancing both print and online have to run two businesses where one formerly existed. Investments in infrastructure are increasingly important, and sizable. Staff need new skills. Mandates drive policies, which are time-consuming and expensive to implement seamlessly. New business options take time to scale up, and have to be managed. Every publishing event is a multimedia event. The bottom line is that available revenues are spread across more initiatives than ever, as audiences expect rich print, online, and integrated media experiences.

Currently, publishers are responding to these financial pressures with a certain degree of fatalism, which I’ve seen represented recently in major shakeups and reorganizations, in the hiring of digital gurus who bring buzzword-laden philosophies but no audience-centric discipline, and in attempts to do more of the same at a pace that might at least preserve some advantage. There is an acceptance that digital is here to stay, but a grim realization that it might not be sufficiently robust to sustain the level of activity, cooperation, objectivity, and service we’ve become accustomed to providing as part of the academic, scholarly, and scientific community.

Ultimately, all these contortions only delay the inevitable. There are only so many sources of revenues for publishers. Either the suppliers pay (authors) or the consumers pay (readers). Proxies like institutions and advertisers are no longer going to be able to carry the load themselves. This leaves the direct approach — asking readers to pay, without pretense.

Will 2014 be the Year of the Paywall? Our industry runs on its own clock, so while the year may or may not be correct, the trends indicating a strong future for the individual paywall are hard to ignore.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.


71 Thoughts on "Will 2014 Be the Year of the Individual Paywall for Publishers?"

I’d be interested to hear what evidence you have that gold OA business models have low profit margins (with the exception of PLOS One). Given the number of publishers who are trying out this business model, it looks like many of them think it will make money, and some solely open access publishers appear to be making a profit already. And surely if the margins are low, putting up the APCs would solve that (at least as much as the individual paywall model, anyway)?

The evidence is everywhere, if you look. Publishers dabbling in OA admit that their subscription businesses are subsidizing their OA experiments. PLOS itself loses money on its prestige titles, making up for this and then some with PLOS ONE. Price pressures around OA demand low margins by definition. Raising APCs to sustainable levels routinely invites the anger and animosity of OA advocates, who feel OA has to have low margins and low prices.

It is interesting to note that the Gold OA model is a form of individual (or small group) paywall. Experiments with this are perhaps part of the move toward more paywalls — for publishing, for reading, for submitting, or for all three at once.

” Publishers dabbling in OA admit that their subscription businesses are subsidizing their OA experiments.”

If you are talking about Elsevier, Wiley, or Taylor & Francis, sure, take a look at their fully OA journals. The vast majority were launched this year. Most have no or virtually no articles. Many have waived the APCs. So is it surprising they are losing money?

BMC, PLOS, Hindawi, and Co-Action, to name a few, are all financially sound and in the black. The average APC of the articles published by these publishers is well below $2,000.

Again, you’re cherry-picking. BMC, PLOS, and Hindawi are not about “dabbling” in OA but are fully committed to all-OA models, and have stripped down their editorial handling accordingly. And, again, we’re not talking about the same things.

Maybe that’s the lesson: No dabbling, like Wiley or Elsevier, just go for it and become a committed Open Access publisher. Profits may follow.

So you are advocating garbage in, garbage out. That is not a publishing program but rather a libertarian rant.

And you are cherry-picking what I said. I was referring to established OA publishers being quite financially sound while charging well under $2,000.

As far as publishers dabbling in OA:

“If you are talking about Elsevier, Wiley, or Taylor & Francis, sure, take a look at their fully OA journals. The vast majority were launched this year. Most have no or virtually no articles. Many have waived the APCs. So is it surprising they are losing money?”

Here are the PLOS financials, with a citation:

The 2012 financial year represented a third consecutive year of sustainability for PLOS. Gross revenue grew 57% to $38.8 million (2011: $24.7 million), of which the increase in net assets was $7.15 million (2011: $3.95 million). PLOS’s expenses grew by 52% to $31.6 million (2011: $20.8 million), not least because of the increase in resources required to support the more than 26,000 articles published by the journals in 2012. This represents a 62% increase (2011: more than 16,000 articles); the total number of articles published by PLOS through 2012 was more than 68,000. PLOS continued to invest in infrastructure and innovation to transform and accelerate the publishing process, for example:


PLOS did not make much money, and with the growth it is experiencing, I would think it will be back to break-even unless it increases its fees or gets more money for publishing from institutions.
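Plugging the 2012 figures from the annual-report excerpt above into a quick calculation (a sketch only, using the revenue and expense numbers as quoted) shows the surplus they imply:

```python
# PLOS 2012 figures quoted above, in USD millions.
revenue = 38.8
expenses = 31.6
surplus = revenue - expenses   # roughly matches the reported $7.15M increase in net assets
margin = surplus / revenue
print(f"Implied surplus: ${surplus:.1f}M, about {margin:.0%} of revenue")
```

That works out to roughly $7.2M, or about 19% of revenue, which is consistent with the post’s point that PLOS, via its mega-journal volume, is the exception among Gold OA operations.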


Making money and making enough money are two different things.

If memory serves, and please correct me, it seems that PLOS ONE makes money but the other PLOS journals do not. Will these others survive? Additionally, PLOS, I believe, still gets grants to subsidize its business model.

Kent, as well as paywalls, I would be interested in your view on whether or not you see many Green and Gold journals folding because of the low profit margins.

I think the fate of OA journals is less certain than many believe. There are rumors of some big closures coming. We will see. Everyone is competing over a shrinking pie, yet publishers have more mouths (authors and readers) to feed than ever. That’s the core problem.

Personally, I have always preferred data to rumors. For example, see


on the growth of the journals of some of the major publishers in OASPA. I am not sure where the shrinking pie is, but it doesn’t seem to be impacting these publishers.

Pete Binfield’s recent blog post


highlights the fact that the mega-journal market is continuing to grow exponentially, even with PLOS ONE on track to publish around 31,000 articles this year.

My comment and your comment are not about the same things. I’m writing about rumors of some high-profile closures I’ve heard about, and how some OA journals are likely to close in the next few years. That doesn’t mean mega-journals. And, yes, everyone is competing over a shrinking pie. If you don’t think OA journals are dragging money from research budgets and library budgets, you’re fooling yourself. And these budgets are not increasing as quickly as the demands on them are growing.

And subscription journals are sucking money out of universities. Library budgets can’t keep up with the demand either. It is just a question of who is going to pay the bill.

On the other hand OA journals are sucking money out of authors and grants and libraries.

I don’t see academics paying for unsubsidized access to the work of other academics, especially where that work is in digital form. Academic writers have long adhered to a share-and-share-alike tradition, where a note to the lead author usually produces a copy of the requested article. This used to cost both time and postage but now requires only an email and a short wait. As library subscriptions wane, look for more and more of this.
These schemes to preserve the status quo in scholarly publishing are akin to rearranging the deck chairs on a sinking ship. The fundamental problem of inefficiency is being ignored and, sadly, that’s where viable solutions lie.

Yes, this is the custom. But the article has to be published in order to be shared. Of course, those in the inner circle share prior to publication, but the rest of the world has to wait for publication.

Say an author suddenly receives 200 requests; will s/he be so willing to stop what they are doing in order to share for free? After all, time is money.

There is no free lunch, not even for academics.

Speaking as an active researcher, I dream of the day when 200 people care enough about my work to write to me about it. Time spent responding to such requests would not be a distraction from the work; it would be the work.

This is very outdated thinking, this “especially in digital form” kind of magical thinking. Digital form is actually very expensive to accomplish in any robust, dependable, and integrated way; it costs money to maintain, and it costs money to archive. Studies, direct experience, and projections all show the same thing.

“Inefficiency” is just a synonym for “magic.” Publishers are extremely efficient, but do a lot of things you don’t see. “The fundamental problem of magic is being ignored and, sadly, that’s where viable solutions lie.” Yep.

“Publishers are extremely efficient, but do a lot of things you don’t see.”

Why would customers care about services they can’t see?

Restaurant customers don’t get to see the cleanliness levels of the kitchen. Should they care that efforts are being made to attain hygiene standards? Would you be willing to pay to avoid salmonella?

Journal subscribers don’t “see” rigor and selectivity (as expressed by, for example, rejection rates and multiple rounds of review and editing that take place long before the article is made available for reading), but surely they care about them.

This is simple. (Like so many simple truths, it’s unwelcome, but that doesn’t make it not true.)

There are only three ways for a business to thrive financially.

First, it can be the first mover in a new market. But it seems far too late for any of the legacy publishers to do that: they show no signs of having the will or the ability to move quickly into new areas.

Second, it can be part of an inefficient market, i.e. a cartel, and so make money far in excess of the value it provides. I assume no-one (at least no-one who is not a publisher) wants academic publishers to do that.

This leaves the one way to thrive in a mature, efficient market: provide services that people want, and charge for them. Whatever services publishers’ customers need and can’t provide for themselves, publishers must provide.

In academic publishing, this means Gold OA. No, legacy publishers will not be able to make as much profit as they’re used to making. But from the perspective of everyone but the publishers, that’s not a bug, it’s a feature.

Mythology on display here, folks!

Established publishers were among the first to move into the Internet space with viable and well-used journals. HighWire Press was established to help non-profits fend off Elsevier, which was moving quickly online, as were other established commercial publishers. Established publishers using HighWire, Atypon, and other platform providers, and also building their own systems, moved quickly online, with most having a lot of experience and showing a lot of leadership well before the year 2000, when the notion of OA first started percolating. Technologically, publishers were actually very quick and adept at moving from print to online. What you’re saying has no basis in reality.

Market inefficiency is an interesting concept. What if a very expensive journal (list price) had a very low per-use price? Is that inefficient? No, it’s efficient. Who is measuring this efficiency of which you speak? You? Tell me about that, please. And what cartel is there, when 81% of the academic publishers are tiny (<$25 million in revenues)? These publishers are quite efficient, actually, with small staffs, outsourced technology, outsourced non-core business operations, and so forth. Again, what you're saying has no basis in reality.

Gold OA is a version of the paywall, but it is not the only version of a paywall that can work, nor is it necessarily the best. If you accept that authors paying for publication is not corrupt or indefensible, why is readers paying for content they want objectionable to you?

I sense double-standards within the mythology you're peddling.

Kent, don’t most journals already sell individual subscriptions? Are you suggesting that they now market these individual subscriptions more heavily? Else how is this transition supposed to occur?

Many journals generate most of their revenues now through institutional site licenses, which are under increasing pressure as more scientific outputs compete for slowly increasing funds, and as bundles consume most of the air in the room, so to speak. Individual sales are a small or negligible part of many businesses, and a lot of value is bundled into site licenses. Putting up paywalls for popular features or specific content sets may be a way forward. But the problem is that the current approach is challenged. And for many, the current approach relies on things other than individual subscriptions.

Your prediction has become reality for our publication in 2013, for many of the reasons you state: reprints, projects, and the Sunshine Act all conspired to force us into a subscription model.

Recognizing that people don’t want to pay for the entire publication, and would prefer to pay only for what they wish to consume, we moved our pay-per-view over to DeepDyve because they also employ a rent-the-article model.

We believe DeepDyve is not the new DivX, and those who want the content on a short term basis are willing to pay a small fee – again, like many of the streaming video providers you mention.

More options for the consumer.

Besides, the multiples on acquisition are much better for recurring-revenue publications.


You seem to have a fundamental misapprehension about markets here. I don’t think the failure of universities and corporations to buy as much as authors and publishers want to produce is a market failure. This is actually the market at work: they are buying as much as they are willing to pay for. The market failure is that authors and publishers continue to overproduce and complain that there aren’t enough buyers. In classical microeconomics, the supply should contract in the face of weak demand (or at least prices should fall rather than continue to levitate at multiples of CPI).

Constraining the market artificially can lead to misapprehensions about market behavior. The market is the academic and research market, in which universities and corporations participate to their benefit. For both, soft money for research and tax breaks both provide benefits. For universities, tuition and fees from undergraduate and graduate students provide benefits. Scientists are key to the funding of universities, yet universities are not willing to pay for the publications they demand of their researchers (publish or perish) or for the research their scientists need to flourish. They demonstrate this through underfunding of libraries, failures to support policies that would restore proper library funding, and cuts to departmental and professional development budgets. This is where the market failure comes in. Universities and corporations are willing to participate in one part of the market, but not in the other. They are not “buying as much as they are willing to pay for.” They are benefiting from one side of the equation, yet are not willing to balance the equation.

There are other market failures at work. Online advertising has been undervalued for a decade, when measured in ways analogous to how print advertising is measured. This is a market failure, as well.

You mistake your wishful thinking for a market obligation. The market is merciless and doesn’t care what you want.

Universities are training more PhDs than academia has room for. Is this a lack of a “market obligation” or something more deeply cynical and improper? Is the lack of support for the academics being trained simply merciless market behavior? Do we accept merciless market behavior in every aspect of life? Health care? Education? Senior care?

Or do we aspire for something more?

Should a lot of PhD programs be scaled back or closed down? Yes. But it is to the economic benefit of universities to take the money from the people who then furnish them with cheap, disposable labor. Markets (when not manipulated, as the Fed is doing with the bond market) are efficient, ruthless, and amoral. I don’t like them much either, but microeconomics does a pretty accurate job of describing life in a fallen world.

Everyone’s preferred solution is that somebody else should pay more for their pet project, whether it is supporting unread journals or creating more tenure lines for the adjuncts. Personally, I would like a villa in the south of France. I won’t hold my breath.

Why is the assumption made that universities are trade schools? I was a Japanese historian who found a wonderful career in publishing.

Scientists are needed to solve problems, not just to reside in academia.

You’re a fatalist about “market realities.” However, markets are inventions, and are managed by humans. Interest rates, prices, exchanges are all human inventions subject to human management. In the 1970s, research grants had to fund some library activity. This was a fair market balance — you fund research, you fund research libraries. President Reagan did away with this element of research funding, and library funding as a percentage of university funding has fallen every year since. That’s not “an efficient market” but a market failure — fund part of the activity but not the entire activity.

“Markets” don’t exist without humans. We create them, set their rules, and manage them. The academic research market is being mismanaged, and has been mismanaged for a few decades now. We have too many PhDs, underfunded libraries, and so forth. Fail.


I agree with your statements above, except for the assumption that we have too many PhDs. We may have too many for academia, but not for the rest of human endeavors.

Kent, I wonder: how much of the “new OA literature” is worth publishing, or should I say reading? If the criterion is simply ‘the method is sound, let’s publish it and get our $1,500,’ regardless of contribution or quality, why would a university or corporation support such offerings?

I think the traditional model is doing the same in some ways. Specialized journals proliferate, but are they needed?

Now everyone thinks their paper is groundbreaking and important, but after being rejected by the upper tier and then the secondary and tertiary tiers, one would think the author would re-evaluate the work. But no. Instead, the author takes out the checkbook, writes a check, and voilà, the article is published.

To think PLOS ONE has published 31,000 articles. Just how long did it take JACS to reach that number?

“If the criterion is simply ‘the method is sound, let’s publish it and get our $1,500,’ regardless of contribution or quality, why would a university or corporation support such offerings?”

Because science depends on the accumulation of information, not just of sexy information. Every time a replication study is rejected from a journal for being insufficiently exciting, a kitten dies. Every time a failed replication is rejected (especially by the journal that published the original, exciting result), a whole litter dies.

Sounds to me like you want to publish art, not science.

There is scholarly research done beyond “science”, which raises the interesting question of whether the megajournal approach is useful in other contexts like the humanities. Should one size fit all? But I agree that it’s valuable to publish everything valid. You can’t know with 100% certainty what will be important to every single researcher in the future. Put all the data out there, and likely most will be ignored, but at least it’s searchable and can be found if it turns out to be of interest.

Also, just to nitpick, a failed replication study tells one very little and is difficult to interpret. Researchers spend years, if not decades, perfecting skills and optimizing experimental conditions. If someone else can’t perform the same techniques as well, or doesn’t have the patience to work out the conditions properly, that will yield the same result as a properly done experiment where the theory is wrong. How do you discern between the two? Now, an experiment that presents contradictory results: that’s something more interesting…

I agree that a failed replication should be treated with the same scepticism as the original study (or indeed as a successful replication). But I am not quite so quick to dismiss them as you seem to be. If a specialist in field X tried to replicate a study in field X and can’t do so, then it at least means that the original publication’s methods weren’t detailed enough. A procedure that can’t be replicated is not useful, after all.

(Of course this is an area where the glam mags, with their strict length limits, tend to cut the crucial methods sections down to almost nothing, whereas other journals can typically do a much better job with the methods. It’s one of the reasons why I’m convinced that the existence of Science and Nature is a net negative for the progress of science.)

But even with skepticism, there’s still a point where you can’t definitively tell the difference between incompetence and incorrect results. A colleague worked as part of a group of six labs doing a complex three-dimensional Matrigel organ culture technique for their experiments. The six labs kept having trouble replicating each other’s work. It took them over a year to trace the problem to the collagen they were using. They were all using the same collagen from the same manufacturer, but were using different lots, and the variance between those lots was enough to alter the results generated.

Figuring that out took an enormous amount of skill, patience, effort and expense (likely worth it in the long run given the cancer research value of the method). I suspect that many doing replicative experiments are unlikely to make similar investments nor have the skills these top labs needed to figure this out. A failed replication does and should raise a red flag, but that red flag may be an indication of the replicator’s competence, rather than the original experiment’s validity.

And I’m not sure I’d put all the blame for the lack of protocol detail in papers on the shoulders of Science and Nature. There are hordes of journals out there, all along the spectrum, that don’t require (and often don’t want) that level of experimental detail. It should be noted that Nature, among others, does encourage authors to put that level of detail into the supplemental materials. But one of the biggest problems is that there seems to be no additional career credit given for creating detailed protocols and sharing them with one’s colleagues. Why go to the extra effort and spend the time you could instead be spending doing the next experiments, which will count more?

I’d put the blame more on the brutal career structure and intense time demands of academia than on specific journals. I edited a biology methods journal for years, and getting people to formally write up their methods was often a bit like pulling teeth. These sorts of papers are seen as second rate, and as not counting as much as a data paper, so why spend time on it?

Interesting war story! It must have been frustrating for the labs involved, but of course a real advance in knowledge came out of it: the understanding of how sensitive the procedure is to minute differences in collagen. Hopefully that became the foundation of working out a new and more robust procedure.

Of course I didn’t mean that this is entirely the fault of Science ’n’ Nature. They are the most prominent offenders, but far from the only ones. The moral should be clear: no reputable journal should ever impose length limits on methods sections. (If that means they have to move them to online-only supplements, that’s not ideal but much better than nothing. But not all journals have great records at keeping online supplementary information stable and available.)

“One of the biggest problems is that there seems to be no additional career credit given for creating detailed protocols and sharing them with one’s colleagues.”


“Why go to the extra effort and spend the time you could instead be spending doing the next experiments, which will count more?”

Well, because you’re trying to do actual science; not just advance your career.

I’m pretty sure we don’t disagree about any of this. It’s a weird feeling 🙂

Yeah, I think it’s one of those nice moments where we’re in line with one another.

One thing to consider is the way journals have been a driving force toward public access to research data in cases like GenBank. Much of the success of sequence databases and the like comes from journals demanding deposit as a requirement of publication. Could a similar policy, requiring detailed experimental protocols, allow journals to take a lead role in driving methodologies?

“Well, because you’re trying to do actual science; not just advance your career.”

A nice thought, but given the enormous glut of PhD researchers and the scarcity of funding and jobs, if you’re not intently focused on advancing your career, you likely won’t be doing science for very long, at least not in academia.

“Could a similar policy, requiring detailed experimental protocols, allow journals to take a lead role in driving methodologies?”

That would be extremely welcome: an unambiguous case of publishers using their power for good. What would it take to get something like this off the ground?

Mike, I am sure you are aware of the protocols series published by both Wiley and Springer. They do just what you request.

Harvey–Ahem! Cold Spring Harbor Protocols (http://www.cshprotocols.org).

Mike–it’s a good question. To me it’s not something journals can mandate on their own, and given the commercial nature of much academic publishing, it would put a journal at a competitive disadvantage if it made authors work harder than other journals that are at the same level. You need a solution that’s broad and goes across many journals and publishers.

Probably the best channels are research societies and research funders. Societies, such as the APA, are very concerned with reproducibility. They can serve as a driver for the community they represent and put together coalitions of journals to help them achieve their goals. Funding agencies could also mandate public availability of protocols as they have done with research papers and data (though like the data mandates, this may run into some IP issues).

While we are on the subject of method/protocol journals, Nature publishes one as well: Nature Protocols (http://www.nature.com/nprot/index.html). But publishing a well-formatted, edited, and refereed protocol is as ‘expensive’ in time and resources as publishing primary research. Nature also has a free resource for sharing protocols: Protocol Exchange (http://www.nature.com/protocolexchange/). All authors on Nature titles are encouraged to use this resource, but few do.

David: You hit on an important point in using the word valid. Just what does valid mean? Let’s say we have method A and it is hard to replicate. I do an experiment and replicate it. I have done nothing more than carry out what has already been done, and because I have $1500 burning a hole in my pocket, I send the paper off to an OA publisher and it is published. Is this something that is valid?

I’m not talking about redundant experiments, I’m talking more about incremental experiments. Much science is done on the tiny details surrounding the big discoveries. One of my medical editors calls it “me too with an accent”: he sees constant submissions showing variants of medical findings in different countries/environments. This work is novel, if only a tiny bit novel, and has value, if only a fairly small amount of value. But if you take hundreds of such tiny pieces and put them together, you could then have a significant and useful picture of that health phenomenon worldwide. Hence the PLOS ONE approach: if it’s accurate, put it out there; don’t worry whether anyone cares or is going to read it. It’s now available in case anyone wants it, rather than sitting in a file cabinet where no one can ever know that it’s been done.

Good point and one with which I agree.

Now comes the next challenge, and that is indexing all these articles and making them searchable. In short, a good parsing system that goes beyond keywords.

Right. The PLOS ONE approach relieves editors of the impossible task of guessing, on the basis of their interests, what articles are going to be of interest to me. Much better that it all be out there, and I make my own choices of what to read.

Mike: The mass of articles now puts you in the impossible position of guessing.

“Much better that it all be out there, and I make my own choices of what to read.”

… and let’s not forget that that’s what happens now anyway. It’s just that it happens by a really, really inefficient procedure of repeated submissions and peer-reviews at a sequence of venues until the authors roll a six.

Which is not to say that journals can’t help you make those choices more effective and efficient. Everyone is pressed for time, so the more filters available to help make the right call on what’s worth spending your time reading, the better.

“Which is not to say that journals can’t help you make those choices more effective and efficient.”

I agree that this seems like it should be true. All I can tell you is that it’s not, for me or for the colleagues with whom I’ve discussed this. What journal a paper appears in is simply not a factor in whether or not to read something for us (beyond a preference for proper journals that have full-length articles over tabloids that have extended abstracts).

Of course, others say their own experience is different. (I know this from experience.) I imagine two factors that play into this are how long people have been in academia (with more recent arrivals less interested in journals) and what specific field people are in.

Anyway: if we publish everything that’s sound, then people who like journals to filter for them can have what they want, and those of us who like to make our own choices can also do so. So by all means retain “selective” journals so long as there’s a market for them. All I’m saying is that the addition of PLOS ONE-style journals to the ecosystem is an unequivocal good.

Perhaps it is a good and perhaps not. Only time will tell the end of that story. I tend to believe that it is a good thing.

However, I find that a journal that has to date published some 31,000 articles seems suspect as to quality and validity, though very profitable, at least until now. I can only see costs going up to maintain the needed flow of papers, or quality going down to hold down the costs of that paper flow.

Having published databases, I found them to be evil things. One must constantly add more and more data to justify them to either the user or the purchaser. I think the model behind OA is similar to the model for publishing databases.

“I can only see costs going up to maintain the needed flow of papers.”

I thought conventional wisdom was that economies of scale allow unit costs to decrease as volume increases?

That is for widgets, not for papers. The fixed costs will remain the same, but the variable costs will increase, especially the costs associated with technology and personnel.

“The recent publishing of fake articles challenges your assumptions.”

That is a matter of fraud, pure and simple. It has nothing to do with actual scientific journals such as PLOS ONE, any more than you can learn anything about subscription publishing from the fraud of the Australasian Journal of Bone and Joint Medicine.


I’m very aware of the Bohannon sting, and have written about it more than once. As I said: the existence of fraud around the OA model tells you no more than the existence of fraud around the subscription model: i.e., nothing.

I think what it tells us is that fraud is possible. But if an EIC, editorial board, and reviewers carry out due diligence, the odds of fraud are diminished.

“Valid” is a key phrase here, and it varies by field. In some fields that don’t rely on human subjects, it’s complicated but not impossible to get a good sense of validity. In medicine and other fields that rely on human subjects, there are preconditions for “validity.” A good study may not have been filed correctly, failing the test of informed consent. Is it “valid”? Yes, but is it publishable? No. There is a precondition to validity, and it is an ethical and procedural one. So, not all valid results should be published in every field, as there are some aspects of scientific study design and conduct that supersede validity.

Again, painting with one broad brush continues to fail us, which is why any policy about publishing which sticks to one model or demands one approach is doomed.

This analysis is clear but leaves unanswered a fundamental contradiction: if the model is an “individual paywall” around an item of content, this implies a level of knowledge of the product by the purchaser, who is looking for a la carte options (specific content) rather than bulk subscriptions (domain coverage). For the purchaser who has that knowledge, the publisher’s services are much less useful for either quality control or discovery (to name two of the main functions traditionally held by publishers). A major part of being a research scholar in the contemporary environment is being better than anyone else (particularly publishers, or even editors with wide portfolios to maintain) at doing those things within one’s sub-discipline. Anyone fluent in using the internet has basically disintermediated the core functions of the academic journal.

The publishing response has been not to double down on academic rigor (by requiring explicit methods for replication, for example), but to trade in prestige, impact factor, and rejection rate to hold their residual administrative function of authenticating those at the top of the pyramid, regardless of quality. As that pyramid itself becomes budget-constrained, this strategy can only be a slow death, both financially and in knowledge. The publishers are basically in the strategic position of IBM missing the rise of personal computing in the 80s. They may not die, but they will have to reinvent themselves.

At the researcher side, options for crowd funding publication costs in the community seem to hark back to the society model that once defined scholarly publishing.

Stimulating article, thanks!


The competent scientist should not have to read or use something only to find out that it was flawed. This is the chance one takes when using un-refereed material. Admittedly, some flawed research gets published even after review, but the amount is rather small.

The user who seeks specific content only relies more on established reviewed literature because s/he is more likely to find well written, presented, and reliable information. This is why branding is so important.

The assumption that material is reliable just because it is on the internet is one of the biggest flaws in your argument. It seems that the material on the internet that is backed by a strong brand is more often used than that which is not.

Just google most any term and one gets hundreds if not thousands of hits – even in science – and of what use is that? For instance, I just googled homeobox gene and got 1.5 million hits. I may as well have gotten zero.

Your IBM analogy really does not apply. Additionally, IBM seems to be doing just fine.

Check out the latest Elsevier financials. They are actually giving back in cash money to stock holders.

Lastly, among the largest OA publishers are traditional publishers Elsevier, Wiley, T&F, and Springer.

The cost of entry into the publishing market is rather low; it is the costs associated with distribution and attracting users that are expensive.

Every time flawed science is published, a person could die! I believe we have had this discussion before. To publish that which has already been published is a waste of everyone’s time. Science moves forward on new knowledge. A failed experiment, if it is a worthwhile one, is as important as a successful one because it allows clearer thinking.

Having been doing this for some 40 years, I have not seen editorial boards ignore contributions, but I have seen them reject poor science.

The common description of the origin of university presses attributes their rise to market failure — i.e., there was no commercial market for the products of research coming out of universities that had adopted the model of German graduate-school education. The longest continuously operating press, at Johns Hopkins, began by publishing journals in math and chemistry. Only later, after WWII, did there arise a genuine market for such research that proved to be commercially viable — for about 50 years. So, what is happening now is just a return to the state of affairs before Robert Maxwell arrived on the scene. There never has been a commercial market for most monographs produced in the academy. Thus, decrying market failure now seems like ignoring most of the history of scholarly publishing, where market failure was the norm. So, is it any surprise that we’ve returned to the origins of the system? What is perhaps more surprising is that entrepreneurs like Maxwell managed to create a market for as long as they did.

One sees little evidence of the collapse of the STM market that is being discussed.

Perhaps those who see it can give some evidence.

Comments are closed.