Open Access – Keeping It Real
Keep it real (Photo credit: Rob Swatski)

As several speakers at SSP’s recent annual conference commented, Open Access is now a given. In the first six months of this year alone, we have seen a memorandum on OA from the Office of Science and Technology Policy (OSTP), a request for information from the Higher Education Funding Council for England (HEFCE), the introduction of the Research Councils UK (RCUK) mandate, a position statement from Science Europe, and an Action Plan towards Open Access to Publications from the Global Research Council (GRC). Like rock and roll, OA is here to stay but, as with rock and roll, it doesn’t always live up to its own hype.

One obvious example of this is the notion that, to quote Robert Kiley of the Wellcome Trust, speaking at a recent Copyright Clearance Center webinar, “Open Access is simple.” While it’s certainly true that the messaging around OA is simple (and has, therefore, been very effective), in reality OA can be quite complex for everyone involved – authors, their institutions, and publishers/societies alike. For one thing, moving from the subscription model of publishing, which has been around for centuries, to a new and completely different one, is not straightforward. In addition, different subject communities have different needs and finding a way to meet all their requirements is far from easy – it is certainly not a case of one size fits all. Likewise, mandate requirements vary from funder to funder, for example, in terms of embargo periods, licences, maximum Article Publication Charges (APCs), and so on. This will inevitably make the publication process more complex for authors, especially those working on international and/or multi-funder projects. None of this is to say that OA is a bad thing, only that it’s far from as simple as it’s claimed to be.

A new, but equally unsubstantiated, claim is contained in Science Europe’s recent position statement. Their member organizations say that “the hybrid model, as currently defined and implemented by publishers, is not a working and viable pathway to Open Access.” I’d love to see their evidence for this statement, since other funders (such as the GRC) don’t seem to feel the same way, and it’s certainly not been our experience. Submissions to Wiley’s hybrid OnlineOpen option have tripled in the last year, and informal conversations with colleagues at other publishing companies suggest they are seeing a similar pattern. Hybrid journals are a sustainable way of enabling researchers to publish in their journal(s) of choice while complying with funder requirements to make their articles available OA immediately on publication. This seems to me to be a great example of publishers developing a new business model that meets the needs of researchers and their funders – a win for all of us. What has Science Europe got against the hybrid option – and why?

Perhaps it has something to do with another Science Europe statement – that publishers should “apply institutional-, regional-, or country-based reductions in journal subscriptions, in line with increases in author- or institution-pays contributions.” I confess I don’t understand the logic here; these are two completely different payments for two completely different services. An article publication charge (APC) is a fee paid to make one article freely available to everyone on publication; a journal subscription is a payment made to provide access to all articles in a journal (or collection of journals) for everyone at the subscribing institution. Asking for a rebate on the latter as a result of purchasing the former is like buying everyone a round of drinks in the pub and thinking that this entitles you to drink there for free whenever you like. Clearly, there are some legitimate concerns about “double dipping” (charging libraries for content where a fee has already been paid to make it available open access) which, to continue the analogy, would be like the bartender charging individual customers again for that round of drinks you bought. However, many publishers, including Wiley, have already developed or are developing solutions to this.

One of the most irritating claims made by funders is that short embargo periods won’t harm journals. We are frequently told that there is no evidence that shorter embargoes will lead to cancellations but, as Phil Davis pointed out in his SK post on the topic, there has been no rigorous scientific study of the issue. Until there is, the only evidence I know of – the Association of Learned and Professional Society Publishers (ALPSP) survey of around 200 librarians – indicates that short embargo periods are likely to lead to significant cancellations. We are clearly in an experimental phase at present, with a whole range of views on what is appropriate and sustainable, from the UK’s Medical Research Council, which is insisting on a six-month embargo, to the UK’s history community, which believes that 36 months is appropriate. The danger is that, by the time this particular real-time experiment provides us with the data needed to make an informed decision about embargo periods, it will be too late and some journals, especially in the social sciences and humanities, will have gone out of business – an unintended, unwanted, and unpleasant proof of the pudding…

So what’s the solution? Gathering more rigorous, scientific metrics – qualitative and quantitative – on OA publishing would be a good start, including monitoring the impact of changes to the traditional publishing model on all stakeholders and across all disciplines within the global scholarly community. Equally important is listening to – and responding to – the needs and concerns of all stakeholders, positive and negative. I’m happy to say that this does seem to be happening more – see, for example, the recent announcement about CHORUS. This proposed project would bring together publishers, societies, vendors, and other stakeholders in a partnership to provide public access to the results of US-funded research, as required by the February 22 OSTP memo – just the sort of collaboration that will be increasingly important in future.

Perhaps most important, though, is for all of us to be open to change. The move to OA publishing is a major change in and of itself, but it will only be successful in the long term if everyone involved is willing to acknowledge OA’s weaknesses as well as its strengths, and to ensure that it continues to evolve to meet the changing needs of all the communities it serves. To quote Kristen Fisher Ratan of PLOS, speaking on Open Access: Its Promises, Challenges, and Future, at SSP’s annual meeting: “we have to learn to adapt and survive”.

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Discussion

73 Thoughts on "Open Access – Keeping It Real"

Alice, I appreciate your post. I have a comment / question relating to the issue of double-dipping. Increasingly, publishers are purporting to adjust pricing based on the number of subscription vs. OA articles in a journal – what Wiley calls the ‘variable’ portion in the document you point to in your post. Presumably the implied argument is that APCs are funding new articles while the number of subscription-funded articles is not decreasing and may even be increasing as well. But in the licensed subscription realm – I’m referring primarily to licensed journal packages, although this can be true for individual subscriptions as well – increased journal size (as measured by article counts) is generally not a factor in price increases. Typically, a large publisher license is subject to a straightforward negotiated increase across all journals, regardless of changes in the article counts of the journals. In fact, publishers will often tout increases in the number of articles published in their journals, despite a relatively constant or inflation-adjusted price, as a major indicator of improved value. Isn’t it quite natural then that subscribing institutions would expect to see their prices offset by new revenues that are now being used to underwrite increased article counts that were formerly absorbed by publishers?

Thanks Ivy. I’m not implying that APCs offset the cost of publishing additional articles, actually – they are charges to enable specific articles to be made publicly available. Rather we are reducing a proportion of our subscription price and overall collection to reflect the fact that these articles have been paid for under another (OA) business model.

Alice, could you remind us of (or point to) the details of your policy? For example, are you reducing list prices or all prices? What happens to customers who are in the middle of multi-year deals – will they see reductions in what they pay?

Thanks

Sort of a side comment to your piece. I appreciate the nuances you uncover, but notice there isn’t mention of the reader finding OA articles, which is the point of the whole effort. I think there are serious deficiencies in actually knowing an article is OA and readable by anyone and, other than a NISO initiative, I don’t see much evidence publishers are systematically trying to help readers identify OA articles that have been paid for through APCs. A cute symbol (varying by site) when you get to the article isn’t helping; the indexes, abstracts, and yes, even Google, should have an obvious indicator when the article is first presented as a target for readers.

Right. To pick an example relevant to my research interests, I’ve never found a way to ask Science Direct “Show me all the open-access articles in Cretaceous Research”. (Does someone know of a way?)

Mike, as a discovery person I am interested to know why you would want that article list? How would you use it? To do research on OA?

The function Mike describes would be useful to anyone who doesn’t subscribe to Cretaceous Research and wants to see which articles from that journal are freely available to the public (without having to scroll through the ToC for every issue).

Exactly. Although I do have access to Cretaceous Research through my university’s institutional login, I have plenty of colleagues who lack this and would find it useful to browse the open-access subset of this and other hybrid journals.

(It turns out that there are only five open-access articles in this journal in total, so manually browsing all the ToCs would be a very unrewarding experience!)

But how is this knowledge useful, Rick? I can see that it might be useful if one is doing research on OA (as I do), but otherwise not. If it is for OA research, then the question is why we should spend the money to create this specialized discovery function.

If you are a Cretaceous researcher without a subscription to Cretaceous Research, then you want to see the open-access articles in Cretaceous Research. Of course you do. How could you not?

Suppose that you have an interest in paleontology of the Cretaceous period. There is good reason to believe that articles on this topic are published in the journal Cretaceous Research — but what if you can’t afford a subscription? In that case, if you could do a search that would result in a list of OA articles from that journal, this would likely be useful to you. (Because you want to read the articles.) Otherwise, you would have to scroll through the ToCs of every issue of the journal, looking for OA symbols. None of this has anything to do with learning about OA itself; it has to do with more easily locating interesting articles to which you have access because they happen to be available on an OA basis.

I think this is one of the great fallacies used to promote OA, one that misconstrues the logic of discovery. If you have this interest you are interested in all the science, the most relevant and important science, not just the fraction that happens to be OA. Moreover, you are only interested in specific aspects, perhaps related to your work. So you start with the ToC on the publisher’s website. If you see interesting articles you then look at the abstracts to narrow your search. You do not start by reading all the articles, or few do.

Let’s say you find a few articles of real interest, so important that you want to actually examine them. Now and only now does it matter which are OA. If you have no personal or library subscription (surely a small fraction of readers), the OA articles you can read immediately. The others take more effort. First check the author’s website; in some disciplines the majority of published articles are available there. You may also find prior work that is even more important than the article you started with. If not, send the author an email requesting a copy – I have never had one refused. If the article is not new, try Google Scholar, which often lists repository copies.

The last thing you want to do if you are trying to see the science is to start with the OA-only literature. Thus I see no value in being able to aggregate that literature. PMC has the same problem, by the way, because it only links to its own literature. Once you enter you are stuck in a closed universe, that of NIH-funded research, which does not represent the relevant science.

The last thing you want to do if you are trying to see the science is to start with the OA-only literature. Thus I see no value in being able to aggregate that literature.

Not everyone with an interest in the topic is trying to get a complete view of the whole discipline (or, as you put it, “interested in all the science”). Some people really are just poking around and looking for engaging articles on a topic that’s of interest to them — this will be more often the case in less-specialized disciplines, and probably less often the case in the more obscure ones. A search utility like the one we’re discussing would not be terribly useful to someone who is doing comprehensive research, but it could be very useful to someone who is investigating more casually.

Good point, thanks Chuck – that’s a good example of something we probably could and should do better through improved collaboration, which, as mentioned, I think will be at the heart of making OA work for everyone going forward.

I must object to the remark that rock ‘n’ roll does not always live up to its hype.

If “rock ‘n’ roll can save the world”, so can Open Access 😉

As a librarian, I dread hybrid journals. How do you make them available to your users if you don’t have a subscription? You don’t catalogue them, because most of the articles will be unavailable and that confuses and frustrates your users. But then they will miss the stuff they could be accessing. There’s a similar problem with OpenURL resolvers – the knowledge base behind them can’t cope with distinguishing between accessible and non-accessible content at the article level, only at the title level.

Thanks Rachel – this is related to Chuck’s point about discoverability I think, and something I agree needs to be addressed. On the plus side, hybrid journals do enable more author choice, which I’m sure we all think is a good thing!

Rachel – as libraries move more and more to index-based discovery services such as Ex Libris’ Primo, Serials Solutions’ Summon, etc., there IS the ability to differentiate at the article level, since all of the indexing is done at that level. This is content that has been loaded directly from providers [publishers and aggregators], so there is no need for the library to catalog the materials. That said, it would be excellent if there were a clearer indication to the reader during the discovery process, and I like the idea of a “cute symbol” as Chuck proposes.

The argument by some publishers that they have been avoiding, or will avoid, “double dipping” globally is not convincing. This could only be transparently verified if publishers would disclose their contracts with every single library/consortium.
That is the reason why Science Europe explicitly recommends “…. publishers to apply institutional-, regional-, or country-based reductions in journal subscriptions, in line with increases in author- or institution-pays contributions”.

I think you are confusing two different issues. If by transparency you mean publishers should open their books to you, that is unrealistic, nor is it implied in the SE quote. That SE quote is basically asking for a rebate system where subscriptions are reduced based on specific APC contributions. That is unfair in the context of OA, which is about access, not rebates. Everyone should see prices fall during the transition, not just those who happen to pay the most APCs.

Science Europe seems to favor APC OA while the US has rejected it in favor of embargo OA. This is a policy conflict of major proportions. The industry is caught in the middle because the rules are inconsistent, to the extent they even exist. Confusion personified.

a) Science Europe currently supports both Green and Gold.
b) Science Europe does not only support APC OA but also institution-based OA (sometimes called platinum OA). It makes sense for research institutions and funders to strengthen non-profit OA models with no or modest APCs; otherwise the incentives for big publishers to increase APCs excessively are very high, see: http://www.eigenfactor.org/openaccess/

But these are inconsistent directions so nothing is decided. A journal cannot be green, APC, hybrid and free. These are simply strange bedfellows talking, each wanting something different. There is no policy here.

So, when you go new car shopping, do you want all the cars to include all the extras at one price…or would you prefer to select only the extras you want and only pay for those extras?

Sorry but I do not follow your analogy. It appears that SE is making prescriptions for different forms of OA, which is not an OA policy, but the bulk of these are directed at APC OA, hence my observation.

On the embargo OA side they want 6 months for the hard sciences but the US has opted for 12 months as a baseline, so even here we have a major conflict. OA is quickly becoming a global mess. This is not surprising since the movement is composed of inconsistent models. The publishers and the community have to deal with this massive policy confusion because science is global.

An alternative to trying to recalculate subscription prices based on geography and percentages of articles that are OA vs. toll is to move to a pure usage-based subscription. I think a lot of publishers include usage as a component of their subscription price, but I’m not aware of any in which usage is the only factor. Simply stated, usage of toll-based content would count toward subscription pricing, whereas usage of OA content would not. As OA content increased, usage of toll-based content would decrease (probably not on a 1-to-1 ratio) and subscription prices would likely fall. To mitigate the risks of a pure usage-based model, bands of usage based on prior year(s), priced somewhat like today’s subscription models, would be helpful.

There are certainly details to be worked through and each publisher would have to create their own bands for their own models, but this should eliminate the issue of double-dipping and relieve some of the concerns about hybrid models.
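To make the mechanics of this proposal concrete, here is a minimal sketch in Python of how such a banded, usage-based calculation might work. Everything in it – the band thresholds, the prices, and the names – is invented for illustration, not drawn from any real publisher’s model:

```python
# Hypothetical sketch of the banded, usage-based pricing idea described
# above. Band thresholds, prices, and names are invented for illustration;
# they do not reflect any real publisher's model.

# (upper bound on annual toll-content downloads, annual price in USD)
PRICE_BANDS = [
    (10_000, 5_000),
    (50_000, 12_000),
    (200_000, 25_000),
    (float("inf"), 40_000),
]

def subscription_price(total_downloads: int, oa_downloads: int) -> int:
    """Price a subscription on toll-access usage only.

    Downloads of open-access articles are excluded, so as the OA share
    of a hybrid journal grows, billable usage (and hence the price)
    falls, which is what addresses the double-dipping concern.
    """
    toll_downloads = max(total_downloads - oa_downloads, 0)
    for upper_bound, price in PRICE_BANDS:
        if toll_downloads <= upper_bound:
            return price

# Example: 60,000 total downloads, 25,000 of them on OA articles,
# leaves 35,000 billable downloads, which falls in the second band.
print(subscription_price(60_000, 25_000))  # -> 12000
```

The key design point is that OA downloads are subtracted before the band lookup, so a growing OA share mechanically lowers the subscription price.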

As I read it SE does not reject hybrids, just double dipping and Wiley has addressed that problem.

Funders and research institutions might have different OA policies, sometimes contradictory, but most of the confusion is caused by (some) publishers – on embargoes, versions, or hybrid – and that is definitely not by chance.
And how did Wiley solve the problem of double dipping? By reducing the subscription price globally? Do you mean the kind of promise that has been made for years? Trust, but verify!

If you want the system — funders, institutions, publishers, and libraries — to open their books to you, then your demands are unrealistic, to say the least. The confusion is caused by a political movement (OA) with inconsistent models and goals, which the system is struggling to deal with while still doing its job.

I think the problem of “double dipping” is largely a matter of perception and trust. Many librarians have long complained about lack of transparency in the financing of commercially published journals, and over time they have come to distrust especially the largest commercial publishers. So probably there is nothing much publishers can do to counteract the charge of double dipping. I have to confess that even as a former publisher myself I have had doubts about whether some commercial publishers are using the hybrid approach simply as a way to increase profits even more.

As for embargoes, I wonder how important they are if most journals are bought as part of a package, like Project Muse? Would libraries stop subscribing to Project Muse if the embargo for journals in the humanities was mandated to be only six months?

Thanks Sandy. I think the issue is that most journals depend on their subscription (rather than their license) income to survive, and that is what is at risk with a short embargo.

Actually, while I was at Penn State, the income from Project Muse grew to become more than two thirds of the overall income for the dozen humanities journals we published, as I recall, and I wouldn’t be surprised if the proportion of Muse income has increased even further since then. A smaller publisher like Penn State cannot easily sell subscriptions individually to libraries. I think it is hazardous to extrapolate from a big publisher like Wiley to a smaller publisher like Penn State.

I disagree Sandy. If we are going to transition to APC OA then there needs to be a systematic way to reduce subscription prices accordingly, becoming zero if and when APCs become 100%. It is a math problem. Trust is a separate issue.

Interesting post and comments! I would like to emphasize the need for a clear open access indication in article metadata. This is of key importance for discovery, access, and analytics. It applies to publisher platforms as well as any other place where articles are indexed for discovery and access, such as Primo Central. Not many publishers are providing such an indication in their article metadata yet. As Carol Anne already pointed out, a new NISO group (http://www.niso.org/workrooms/oami/roster/) is working on recommendations for article metadata to include an open access indication. This will hopefully help all the stakeholders to move this forward; the more attention we can draw to it the better. KBART (NISO/UKSG – http://www.niso.org/workrooms/kbart) also added an OA indication to the amended recommendations that are due this year. However, KBART works at the journal level, so especially in the context of hybrid journals this can only be a partial solution. The focus is shifting from the journal to the article level, and hopefully we will soon see this reflected in all areas where articles and journals are managed, discovered, and accessed.
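As a rough illustration of why a journal-level indication is only a partial solution for hybrids, here is a small Python sketch. The field names are hypothetical (the NISO group’s recommendations had not been published at the time of this discussion), and the identifiers are placeholders:

```python
# Illustrative only: field names are invented for this sketch and are not
# the NISO group's recommendations; the ISSN and DOIs are placeholders.

# Journal-level metadata (the KBART level): a hybrid journal simply looks
# closed, because most of its articles are subscription-only.
journal = {"issn": "1234-5678", "title": "A Hybrid Journal", "open_access": False}

# Article-level metadata can carry a per-article OA indicator.
articles = [
    {"doi": "10.1234/example.1", "journal_issn": "1234-5678", "open_access": True},
    {"doi": "10.1234/example.2", "journal_issn": "1234-5678", "open_access": False},
]

# With a consistent article-level flag, a discovery service can answer the
# request raised earlier in this thread: "show me all the open-access
# articles in this (hybrid) journal".
oa_articles = [a for a in articles
               if a["journal_issn"] == journal["issn"] and a["open_access"]]
print([a["doi"] for a in oa_articles])  # -> ['10.1234/example.1']
```

With only the journal-level record to go on, every article in a hybrid journal would look closed; the article-level flag is what makes that request answerable.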

I am curious what this OA metadata is good for? Certainly for tracking the progress of OA (a very small research area) but what else? No one doing scientific research wants to look only at OA articles.

Maybe not, but if you are in a discovery system, you want to know which of the articles are available to you to view. An article could be subscribed to by your institution, in which case you have access, or it could be open access. Subscriptions can be tracked at the journal level, and then there are of course journals that are entirely open access; it is easy to indicate articles from such journals as available and link you to the appropriate copy. But in the case of hybrids, the availability indication is at the article level, not the journal level. This is where it becomes difficult. This comes back to the comment that Rachel Oldridge made yesterday about the difficulties she faces as a librarian in making such material available to her users. Meta indexes can solve this issue because they index at the article level.

Alice, I don’t know the basis of Science Europe’s recent position statement, but it might be an article published in JASIST which suggests hybrid uptake, including Wiley’s, has been quite low.

http://onlinelibrary.wiley.com/doi/10.1002/asi.22709/abstract

You indicated Wiley’s hybrid uptake has tripled recently. I’d be interested in knowing what are the actual percentages.

Also, while I can understand why publishers find funding agencies claiming there is no evidence that shorter embargoes will lead to cancellations irritating, as a social scientist I find referencing the ALPSP survey as evidence of anything pretty irritating.

Hi David, as far as orders for Wiley’s hybrid OnlineOpen option go, based on the first four months of each year, in 2013 these have grown by 395% compared with 2011 and by 267% compared with 2012. The number of OnlineOpen journals also increased during this period, mainly from late 2012. Immediately following the initial RCUK announcement last July, the number of OnlineOpen orders jumped nearly 400% in August 2012 compared with August 2011.

You are right that the ALPSP survey is not very scientific and I should have said so in my post – but in the absence of anything else it’s the best we’ve got I think.

Thanks Alice. Sorry, I was not clear. From the data Bo-Christer Björk was able to piece together for his JASIST article, the uptake of the hybrid option for the larger publishers as of a few years ago was one or two percent. I was wondering what the current uptake of the hybrid option by authors is.

Hi, Alice. I think it would be much more informative to know the number of OnlineOpen articles as a percentage of the total number of articles, rather than the (impressive) percentage of year-on-year increases. (And I’d guess that that’s what David was asking.)

On whether short embargoes hurt journals: didn’t the PEER project have something to say about this? More appositely, surely the ultimate experiment in this field is arXiv, which results in essentially all maths, physics, and astronomy journals having a negative embargo period (i.e. the open-access manuscripts appear before the journal version). Isn’t it the case that journal revenue in those fields continues to increase despite (or maybe because of) arXiv?

Sorry – I misunderstood the question. The proportion of OnlineOpen articles we published increased from 0.4% of the total in 2011 to 0.8% in 2012 – very small still, as you say, but growing.

Re short (or even zero, as in arXiv) embargoes, I agree these may be appropriate in some disciplines. In fact, the ideal world would be one where free access to articles in repositories boosts usage on the publisher’s platform – like higher attendance at football games due to more exposure on television. So we should continue to explore this in trials, but the ALPSP survey result was clearly a warning that this may not be true – at least not across all subject areas – something that is borne out anecdotally by our conversations with librarians.

The ALPSP survey was a useful exercise from the perspective of providing PR fodder for people who’d already decided what their position was going to be and were looking for something evidence-y to shore it up. But I am genuinely surprised you’d take it seriously as an actual source of data.

To be fair to Alice, she did say in the post above, “there has been no rigorous scientific study of the issue. Until there is, the only evidence I know of…”

But as a scientist, you should know that an argument dismissing a study because you disagree with its results is weak tea. If you can point to specific flaws in the methodologies used in the survey, and provide evidence to the contrary – that librarians are likely to continue to subscribe to journals with short embargoes – then you might have a reasonable argument.

Here, you’re just shooting the messenger.

I am sorry David, but the ALPSP survey was a terrible survey. I don’t want to belittle the point but it consisted of one very vague question:

“If the (majority of) content of research journals was freely available within 6 months of publication, would you continue to subscribe?”

Does that mean 51% of the content or 99%? Is this a journal you are considering dropping anyway, or one that is part of an accreditation requirement for one of your programs? Do you mean the published version, or the accepted version without copy editing or formatting? Etc., etc.

I could go on but that is not the point. If ALPSP was serious about doing a rigorous study of this issue they could have done one.

Funder mandates are on their grantees, not the publishers. Obviously they could potentially impact publishers if they result in canceled subscriptions, but it seems to me it is up to publishers to do rigorous studies to address this question.

David, thanks. Comments like this are much more informative than a blanket dismissal of the results because one doesn’t agree with them. I fully agree with you that more rigorous evidence is needed in this area. As I understand the way the OSTP has set things up, a 12 month embargo is the default, and to change it (in either direction) requires a rigorous, evidence-based process. So I suspect we may see them in the near future.

Your point about the vague language used is an important one. The validity of a survey depends crucially on how it is conducted.

That said, even with the questionable phrasing used, has any evidence been presented that the basic conclusions drawn are wrong? I don’t think it’s far-fetched to assume, at least for some fields, that very short embargoes will indeed lead to some level of subscription cancellations.

On the other hand, as the discussion above suggests, if the library wants to continue to present the content, but without subscription, it will need a discovery mechanism that accesses the after-embargo articles. This is an obstacle to cancellation.

Thanks Mike, very helpful links. While Poynder is always informative, not everyone has the time to regularly read his lengthy articles.

But even ignoring the ALPSP survey, I think the point is inescapable – there’s some point where the embargo period gets so short that it’s no longer worth the cost to the library to subscribe. They’ll just wait it out. It’s hard to know where that point is, and I think Alice is right – it’s going to vary for different subjects and likely even for different journals within a subject.

I suspect that if the OSTP policy goes into effect as planned, you’ll see a horde of publishers immediately lobbying for longer embargo periods by correlating the current ones to any downturn in business. To offer real proof though, they’re going to need to tie lost subscriptions directly to the shorter embargoes, likely doing a lot of direct interviewing of cancelling librarians. The tough question though, is how one would present evidence for a shorter embargo period. It would seem to me that most arguments are going to be speculative, like Alice describes, a try-it-and-see-what-happens approach. Not sure how that will work under the “evidence-based” procedure for changing embargo periods.

David, I think it is more complex than that. The compliance rate for mandates is less than 100%. The NIH is now at 75%, and that is the best I have seen. The only study on university mandates that I have run across is by Gargouri and his colleagues a few years ago, and the average compliance with mandates was about 60%. Moreover, most universities do not have mandates, I suspect few journals publish research from a single funder, and in many fields external funding is rare. In short, at least in the near term, I expect very few journals will have anything like complete coverage in repositories. Also, we are talking about accepted versions, not published versions, and a significant portion of journals are tied up in big deals. Just speculation, but I find it hard to believe publishers are going to be hurt by mandates unless things change dramatically.

Great points David. But I think they’re somewhat dependent on assuming that the status quo is maintained. NIH compliance is around 75%, but much of that is due to publishers automating the process and depositing articles on behalf of authors. If a policy like the OSTP’s is based on forming public/private partnerships, I would assume that publishers would do even more of this to help compliance levels climb even higher (and the CHORUS technology seems a pathway to achieving this).

Keep in mind that funders like Wellcome and the NIH have only just recently begun to try to enforce their policies in any meaningful way. So regardless of any change in policy, we’re going to see compliance climb as people are going to actually start losing their funding if they don’t follow the rules.

Further, I don’t see there being fewer funder mandates in the future. As these first few funders work out the kinks, I’d expect most, if not all research funders to follow suit. So even if a journal currently only sees a small percentage of articles funded by a group with a mandate, it’s likely that percentage is going to climb as more funders issue more mandates.

Regardless, I agree with you that this is all sustainable for publishers unless things are done in an extreme and unreasonable manner.

Isn’t it a bit premature to be talking about “the CHORUS technology”? Unless you know something I don’t, CHORUS is only a proposal, and nothing has actually been built yet.

We can’t talk about a technology or a proposal until it’s finished? Seems a bad way to run a project, not allowing any planning or feedback until it’s too late.

You’ll note I said it “seems” a helpful pathway, not that it actually was as of yet.

Of course I am not saying we can’t usefully talk about CHORUS! All I am saying is that the CHORUS technology doesn’t exist, so we’re only talking about the CHORUS method, or strategy. That is a fruitful thing to discuss, for sure; but I don’t want to give the impression that it’s further advanced than it is.

So it’s just a semantic thing then? Had I said, “proposed CHORUS technology seems…” that wouldn’t have raised your hackles? I thought that was implied, but fair enough.

“So it’s just a semantic thing then?”

Yep. No hackles, just trying to avoid misunderstanding.

100% agreement on Richard. His posts are packed with important information, but just intimidatingly long. I recently had an exchange with him on Twitter in which I suggested he provide abstracts, but he was surprisingly (to me) resistant.

Here is the real issue with Green OA as I see it. What it makes available is the accepted manuscript, i.e. the product of the work of the author and reviewers, before the publisher has made any significant contribution. What eventually appears in the journal has benefitted from publisher services such as proofreading, copy-editing, typesetting, and journal branding. That is the value that the journal adds, and that is what libraries legitimately pay for. It seems to me that if that added value is worth the price of the subscription, then libraries will pay it whether or not the unformatted MS-Word file is available on the Internet — even with zero-length embargoes. And the experience of the maths and physics journals whose papers all appear on arXiv bears that out: they are doing just fine in the presence of zero-embargo manuscripts.

If, on the other hand, libraries would prefer to use the Green OA versions of articles than pay the subscriptions, then that looks like evidence that the value the publishers are adding is not valuable. And if that’s the case, then it’s right that the Green OA manuscripts should be used instead, until such time as the publishers either start adding more value or reduce their prices.

Is there any part of this that you’d disagree with?

So you’re saying the peer review process, where the paper often goes through multiple rounds of revision, has nothing to do with the journal or the journal’s editors? All that has already happened by the time you get to the author’s accepted manuscript. Is there no value in that? How does that get paid for?

As David Solomon has suggested in a comment above, at the moment there’s a pretty significant difference between the sum total of the papers released via repositories like arXiv and the total number published in journals. Using only arXiv would cause one to miss out on many papers. That may help explain why physics and math journals continue to do fine (and perhaps there is indeed some perceived value added via the peer review process).

And for the record, OUP deposits the final, published version of papers in PubMed Central, not the author’s manuscript.

I think Green OA may be perfectly adequate for some purposes, such as teaching using Green OA versions as class assignments. But for purposes of scholarship it seems only appropriate that the final published version be used for quotation and citation. Thus the Green OA versions may substitute for some purchases, such as those involved with coursepacks and e-reserves to the extent that they exceed fair use, but not necessarily for subscriptions.

So you’re saying the peer review process, where the paper often goes through multiple rounds of revision, has nothing to do with the journal or the journal’s editors?

Not at all. Of course the important work of the handling editor is associated with the journal. But I would say that for the great majority of journals, which rely on volunteer academic editors as well as volunteer peer-reviewers and of course volunteer authors, it has very nearly nothing to do with the publisher. With the exception of those few journals whose publishers pay professional editors to handle manuscripts, a peer-reviewed, revised manuscript is almost 100% the result of researcher labour, and the publisher at that point has made a negligible contribution. It’s only after this stage that the publisher adds significant value.

Which is why I never understand why publishers seem to be so afraid of a bunch of unformatted double-spaced MS-Word files on the Internet.

There’s more to it though, than just a bunch of volunteers stepping up. The process takes place through systems that are expensive to build, maintain, and improve. Editors must be trained on those systems. And the editors themselves don’t do a lot of the work to make things happen: managing editors, editorial assistants, and others are the real drivers behind the scenes. These people are not volunteers, and a good managing editor is worth his/her weight in gold. There’s a lot of mundane and tedious work behind the review (and the publication) process.

Rubriq suggests that process is worth $700 per round of review, pass or fail. Figure the majority of papers go through more than one round of review. F1000 Research charges $1000 just to post your preprint online and maybe get you a few one word reviews. Clearly there’s more going on here that’s beyond “negligible”.

Which is why I never understand why publishers seem to be so afraid of a bunch of unformatted double-spaced MS-Word files on the Internet.

Done properly, there’s nothing to fear. Though the ideal situation is to broaden access to the version of record, to keep everyone (literally) on the same page, to get everyone access to the improvements a journal adds (think of all the extra features like altmetrics and article level metrics PLOS builds into their papers) and to updates, corrections and retractions. That may mean sending people to the journal version preferentially over the repository version, but if there are sustainable ways of doing that (Green or Gold), all the better.

I hope this ends up in the right place. I am referring to the quote below by Mike Taylor.

“Not at all. Of course the important work of the handling editor is associated with the journal. But I would say that for the great majority of journals, which rely on volunteer academic editors as well as volunteer peer-reviewers and of course volunteer authors, it has very nearly nothing to do with the publisher. With the exception of those few journals whose publishers pay professional editors to handle manuscripts, a peer-reviewed, revised manuscript is almost 100% the result of researcher labour, and the publisher at that point has made a negligible contribution. It’s only after this stage that the publisher adds significant value.”

Mike, I agree with David Crotty that publishers do a lot of valuable stuff even when there are volunteer editors. I started a journal with some colleagues and ran it for 12 years. I think we did a pretty good job and managed to get it into PubMed without charging authors or readers, up until the last year, when we had to charge a $100 APC to get enough money to get the articles converted into XML that PMC would accept. While it was possible, we spent a huge amount of time doing the things publishers do, not nearly as well, and probably taking three times as long as professionals who know what they are doing. We finally gave up and turned managing the journal over to a really good professional OA publisher. They are charging a $1,000 APC, which bothers me, but much of that $1,000 was coming out of our hides and we were not doing anywhere near as good a job. I think PeerJ has shown it doesn’t have to cost a fortune if a publisher can be creative about doing the publishing tasks very efficiently, but it does cost something. For journals that are very selective, go through several rounds of revisions, and do real copy editing, it’s going to cost considerably more.

Would you say the same thing about the publisher’s role in the editing of books, Mike? In that arena, too, publishers rely heavily on the expert opinions of external reviewers, who are paid, but only modestly; it would not be a stretch to call this (mostly) volunteer labor also. However, acquiring editors add a considerable amount of value, I would argue; in an article I published back in 1999 I identified nine distinct roles that they play in the process of scholarly communication.

There’s more to it though, than just a bunch of volunteers stepping up.

Perhaps a little more.

The process takes place through expensive to build, maintain, improve systems.

Not really. There are plenty of free systems for handling the peer-review process, including Open Journal Systems, Annotum and the PeerJ system.

Editors must be trained on those systems. And the editors themselves don’t do a lot of the work to make things happen. Managing editors, editorial assistants and others are the real drivers behind the scene.

That does not tally with my experience. After the author and reviewers, the handling editor does the great majority of the work prior to acceptance. (Of course lots of other publisher people come in at that point; but remember we’re talking about the creation of an accepted peer-reviewed manuscript here.)

Rubriq suggests that process is worth $700 per round of review, pass or fail.

I’m not sure what Rubriq is selling, at this point, nor who is buying it. It seems the main value they’re offering is the freely contributed time of peer-reviewers. I remain to be convinced by their model, and I certainly wouldn’t adopt it as a baseline for evaluating others.

Figure the majority of papers go through more than one round of review. F1000 Research charges $1000 just to post your preprint online and maybe get you a few one word reviews. Clearly there’s more going on here that’s beyond “negligible”.

Again, I question the value that F1000 provides, and I don’t (at least yet) see a rush to use their service. I do like their model, but I think it’s going to be very hard for them to justify their price.

Which is why I never understand why publishers seem to be so afraid of a bunch of unformatted double-spaced MS-Word files on the Internet.

Done properly, there’s nothing to fear. Though the ideal situation is to broaden access to the version of record, to keep everyone (literally) on the same page, to get everyone access to the improvements a journal adds (think of all the extra features like altmetrics and article level metrics PLOS builds into their papers) and to updates, corrections and retractions.

Yep. These are all good reasons to believe that the journal’s final published version is superior to the accepted manuscript, and therefore reasons not to fear zero-embargo Green OA.

Sorry, Sandy, I know nothing about the book-publishing process, so can’t offer an informed comment.

I’ve been following this conversation with interest – thanks for all your comments. I still believe we are in a period of experimentation and that it is too soon for any of us to be able to say with certainty what works and what doesn’t in an OA world. What succeeds in one community may fail in another, what is valued by one group of stakeholders may not be by another, what enables one journal to thrive may kill another. So there is a definite need for more rigorous research on this (which, as David S says, could certainly be carried out by publishers) – it’s good to see some funders starting to acknowledge this as well.

Mike, I find your comments about Managing Editors to be ignorant and slightly offensive. Many Editors and Managing Editors work extremely hard in the pursuit of improving scientific publishing and science as a whole while receiving sweeping negative feedback.

To overlook the peer review process in one sentence and say there is no value added also makes me wonder where your expertise in this arena comes from. Did you find reviewers yourself for all your publications? How much time did that take, and at your hourly rate what did that equate to? If you didn’t, why not?

OA has made doing peer review well financially questionable. Many people focus on the cost of one paper: how much does it cost for one manuscript, start to finish? Let’s say the cost is (for argument’s sake) $1,000 and the publisher charges $1,500 in publishing fees – a net profit of $500. That’s the most a publisher can make on that paper. Ever.

You could say “that’s great, a publisher shouldn’t even be making profit”. Personally I don’t agree. If there wasn’t value in publishing everyone would just post their findings to their own website, but I digress.

The trouble with the OA model is that every item that comes in to a journal to be reviewed adds to the cost of running the journal, yet the margin on one published paper will never change. You are paying to run a review process on papers that may not be accepted, which cuts into the fixed maximum profit on those you do accept. So there are two options: 1) run a very minimal editorial process, or 2) be more selective about what you review, to provide a better service.

Both are viable options and there is nothing wrong with either. But performing both is not a viable option. You will find that most OA journals that offer great editorial service are running off the infrastructure of larger publishers who are earning the majority of their profit from subscription revenues. They are subsidized.

My point is that you cannot say the only value a journal adds is typesetting and publishing on a website – it’s not. If your experience is of journals where you received minimal editorial input on your manuscript, that may be because they were OA journals, not because there aren’t publishers out there providing great service.

David – I think Christine may already have covered this, but just to reiterate: one answer to your question, “I am curious what this OA metadata is good for?”, is the following. If a consistent OA indicator were available in metadata feeds, then when designing the indexing service of a discovery system, one could build in logic such that if the article being indexed has an OA indicator, the system knows it is always able to display that full text to the end user. For articles without an OA indicator, the system would need to rely on information about the institution’s subscription/holdings info to know whether the full text can be displayed or only the metadata/A&I. As others above have pointed out, given the growth in discovery systems right now, I think this would be a valuable capability.
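A minimal sketch of that decision logic, with hypothetical field names and toy data (not taken from Primo, Summon, or any real discovery system):

```python
# Sketch of the indexing/display logic described above. The flag name and
# data shapes are hypothetical, not drawn from any real discovery system.

def can_show_full_text(article: dict, holdings: set) -> bool:
    """Decide whether the discovery system may link the user to full text.

    A consistent article-level OA indicator short-circuits the check:
    OA full text can always be shown. Otherwise the system falls back
    on the institution's subscription/holdings information.
    """
    if article.get("open_access"):
        return True
    return article.get("journal_issn") in holdings

# The institution subscribes to one journal in this toy example.
holdings = {"1111-2222"}
print(can_show_full_text({"journal_issn": "3333-4444", "open_access": True}, holdings))   # True
print(can_show_full_text({"journal_issn": "3333-4444", "open_access": False}, holdings))  # False
print(can_show_full_text({"journal_issn": "1111-2222", "open_access": False}, holdings))  # True
```

The OA flag deliberately takes precedence over the holdings lookup, which is exactly why a consistent indicator in the metadata feed matters: without it, the system can only fall back on journal-level holdings and hybrid OA articles go undetected.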

Some publishers are already indicating OA status in their A&I feeds, but not in a consistent way, and not across the board. Hopefully the NISO OA initiative will help get us to that point!
