Back in 2012 when we asked the Chefs, scholarly publishing was shifting (or had already shifted) from being an industry based on selling products to being service-based. The successful transition of books and journals to the digital realm continues to create new needs for the research community and thus, potential new services and revenue streams for publishers. Throughout the internet, convenience has become an important selling point. Some will always opt for the more difficult “free” version, but for those for whom time and effort are valuable, paying for convenience makes sense. A new pilot study performed by Elsevier and the University of Florida’s libraries offers an intriguing glimpse at a potential new service.
Elsevier and Florida’s project looks into ways that publishers and libraries could partner to automate and expand the content found in institutional repositories. As with anything Elsevier does, the immediate online reaction was largely negative, laced with the usual suspicions about Elsevier’s motivation. For many, it’s almost impossible to objectively judge anything that Elsevier does. This is a bed that Elsevier has made for itself, becoming the prime target for all the sins of the publishing industry through its own past missteps and continuing controversial business practices. But the kneejerk responses have reached the point of absurdity (a favorite recent example being a suggestion that Elsevier had spent millions of dollars to buy SSRN so they could deliberately ruin it). Elsevier didn’t achieve its dominant position by being dumb, and while it’s amusing to picture Tom Reller going to work every morning at a secret volcano lair, any move they make is worthy of more serious analysis.
Elsevier is making major moves away from relying on content sales and toward a future based on selling services to the research community and others. The glaring need this potential service meets is the generally unsatisfactory performance of institutional repositories. Despite years of effort, key problems continue to plague institutional and funding agency repositories — significant effort and overhead for universities/funders, and poor researcher compliance resulting in a mostly empty storehouse. The Elsevier/Florida project, while not a complete solution, suggests how these problems could be solved.
Compliance issues are difficult and often expensive to overcome. Essentially, researchers don’t see depositing copies of their journal articles in repositories as a priority, and as we know, researchers won’t do anything they aren’t absolutely required to do. Since most repository policies (in the United States, at least) offer an easy opt-out choice and very few have any sort of monitoring and enforcement schemes in place, the easiest path for a researcher is simply not to bother. Deposit is a distraction from the real work of research, and consumes valuable time and energy that could be spent more productively elsewhere. Until this year, the repository at the University of Florida contained a grand total of seven articles published in Elsevier journals — a tiny fraction of the roughly 1,100 articles that Florida researchers publish in Elsevier journals each year and that are eligible for local deposit. Where universities have put in place careful programs of monitoring and enforcement, the costs are often staggering.
The obvious solution is to take the decision out of the researchers’ hands, and make deposit an automated part of the publication process. At a recent meeting, Neil Thakur from the NIH shared data showing that when the ACS deposited articles into PubMed Central on behalf of funded authors, compliance levels were above 90%. When the ACS switched to a policy where authors were left to do it themselves, compliance dropped to around 50%.
There is an obvious service to be offered here that is clearly desirable for institutions and funding agencies — if they want well-populated repositories run on a cost-effective basis, publishers can make that happen.
However, publishers must be sufficiently motivated and a business case must be developed. One of the biggest stumbling blocks is that repositories draw away essential traffic from journals, and it’s hard to support a peripheral service product that undercuts one’s primary revenue source. Elsevier’s concept here solves that problem by combining the two and making the journal itself the repository. Elsevier identifies papers with authors from the university and the metadata from those papers powers the university’s discovery portal, pointing readers to the papers in the journals. The number of Elsevier-published articles that can be found through Florida’s repository has now grown from seven to over 31,000.
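The mechanics described above, a publisher identifying articles with authors from a given institution and handing the library metadata records that point back to the version of record, can be sketched in a few lines. To be clear, this is an illustrative assumption on my part, not the pilot's actual implementation: the record shape, field names, and simple substring affiliation matching are all hypothetical.

```python
# Hypothetical sketch: select a publisher's articles by author affiliation and
# build minimal repository records that link to the publisher-hosted copy.

from dataclasses import dataclass

@dataclass
class Article:
    doi: str
    title: str
    affiliations: list  # plain-text affiliation strings from the publisher


def harvest_for_institution(articles, institution):
    """Return repository records for articles with at least one author
    affiliated with `institution`."""
    records = []
    for art in articles:
        if any(institution.lower() in aff.lower() for aff in art.affiliations):
            records.append({
                "doi": art.doi,
                "title": art.title,
                # The repository stores metadata only; the full text stays on
                # the journal platform, reached via the DOI resolver.
                "url": f"https://doi.org/{art.doi}",
            })
    return records
```

In a real system the matching would be far messier (affiliation strings are notoriously inconsistent), which is part of why a publisher, who holds the authoritative submission metadata, is well placed to run this step.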
Readers at the University of Florida, which subscribes to the journals in question, have access to those articles, as would subscribers elsewhere, and of course, any open access articles are also available to all. That would seem to solve the internal access needs for a university repository, but many are equally interested in external, or public, access to the university’s published works. Since most publishers have implemented policies allowing the display of at least some version of papers after an embargo period to meet Green open access requirements, this could be a useful next step — send the reader to the best available version of the paper (hosted by the journal) to which they have access.
The other big problem here for many is the question of ownership and trust. This remains a fundamental question as we continue to move into a digital, distributed market. Though streaming services are rapidly replacing the purchase of media, librarians must think of the long term, and of permanent archiving. Many librarians see publishers as adversaries, and don’t trust them to follow through on promises of perpetual access.
This is not an insurmountable problem though, and legally binding guarantees and ironclad mechanisms can resolve such concerns. CHORUS, for example, guarantees perpetual public access through the use of trusted third party archives like Portico and CLOCKSS. Should the publisher-hosted version ever cease to be publicly available, the archived version comes to light. Libraries in such an arrangement would be free to collect copies of all identified papers to build their own dark archives, or better yet, automated deposit in an institutional dark archive could be part of the paid service (with conditions specified that public availability of those copies is not allowed as long as the publisher lives up to its promises). If the articles are guaranteed to be perpetually publicly available, does it matter where they are hosted?
In many ways, this is a win-win. For publishers, a new revenue stream is created. Traffic to the journal is retained rather than lost to third party repositories; those visitors count in COUNTER statistics, can be shown ads, and can potentially be upsold to read other journal content. For readers, access to the best available version in context is superior to getting a PDF or Word document from a third party source — if there are any related articles or editorials, these become evident. Is the article part of a special issue or section? Perhaps most important, if there have been any corrections, updates, or retractions of the article, the reader can see them. This sort of information, along with important additions like links to datasets, protocols, or open peer review reports, is regularly displayed on journal websites, but unlikely to be included in a one-time deposit of a PDF into a repository.
For librarians, a low-effort, high-compliance solution is offered for problems that have dogged repositories for years. Rather than continuing to put further burdens on researchers (and hoping they’ll one day reverse their behaviors), paying for a service that routes around this particular roadblock may be worth the cost.
Though this is not yet a complete soup-to-nuts solution, Elsevier is onto something intriguing here, with wider implications for the industry as a whole. The real question to be answered by stakeholders is the balance between functionality and control. If you get exactly the solution you want, does it matter whether you own it, or even if it’s owned by someone you consider an adversary? Is there so much animosity and distrust that it overwhelms the benefits offered? Is it better to have a barely functional system over which you have complete control?
The answers will vary, depending on the stakeholder and where it sees its mission and priorities. For those where the benefits of a low-effort, robust repository outweigh the “who controls it” questions, the concepts here are worth further exploration.
Note: It has been brought to my attention that I may have been unclear — this pilot with the University of Florida is not a paid service, and the University is not being charged for the experiment. Any suggestion that the service could over time evolve into a paid service is speculation on the author’s part.
20 Thoughts on "Paying for Compliance: A Potential Path Forward for Institutional Repositories"
Sounds like a useful service, but if the University has to contract with every publisher that publishes its faculty-authored papers, this could add up financially, as well as being complex. Perhaps an aggregator like CHORUS is best.
Also, CHORUS is free, to the US Public Access agencies anyway (and Elsevier is a member). Asking the Unis to pay for something the Government gets for free may be a problem. Then too, in the US case there is a 12 month embargo period. Would the repository service include an embargo for non-subscriber access, or are the publishers giving away their primary product?
Discussed further in the link in the second paragraph of the post above.
I think this speaks to the idea that different libraries/funders/institutions have different goals and missions that they’re trying to serve through their repositories. For some, the suggested solution won’t serve those goals, for others, it may greatly accelerate them.
But I’m not sure what institutional goals this project meets. Providing another discovery tool for the publications? I guess that’s useful.
It’s my understanding that Florida does not subscribe to all Elsevier journals. And if a Florida researcher publishes in one of the journals they don’t subscribe to, s/he won’t have access through the repository record. Though I would have access if my own institution subscribes. They’re not seeding the repository with content or even useful links to content, only metadata.
I must misunderstand the agreement. Otherwise this looks like a lot of work for no real improvement.
We should have a post with some further comment from U. Florida coming in the next few days, so stay tuned. Also keep in mind that this is just a pilot, perhaps more of a “proof of concept” experiment than the final finished end-product of the work.
The institutional goal is specified in the title. It is minimizing the cost of compliance, a cost that repository advocates tend (prefer) to ignore. I used to teach regulation writing to regulators and my primary message was that behavior, not words on paper, was your goal. If publishers can deliver compliance with repository mandates, how much is this service worth? Repository advocates prefer not to answer this question, especially if they want publishers not to exist, as many do. But the question is painfully real.
That is quite a manifesto by MIT. A good example of the attack on scholarly publishing, in favor of some nebulous other world. But the quote from the University of Florida regarding the pilot project is puzzling. It says that the project addresses “…facilitating compliance with US policies on public access to federally funded research.” I do not see how it does this since the various federal agency Public Access programs all require delivery of the article to a Federal repository. How it is accessed from the University repository is irrelevant to compliance.
The issue of automating compliance has been discussed for a while, and it is something I agree with. A way to achieve this is described on slide #9 of one of my recent talks: http://www.slideshare.net/TorstenReimer/automate-it-open-access-compliance-as-byproduct-of-better-workflows
However, I should note that the approach put forward by Elsevier does not help with funder compliance in the UK. We require deposit of the manuscript into an open access repository. Just “embedding” it does not achieve this. I have made this point repeatedly in discussions with Elsevier. However, we keep receiving suggestions to put effort into embedding “solutions” into our repository that do not solve problems of our academics. We did have more promising discussions on receiving metadata close to acceptance through an *open* API, but at this moment it is not clear to me whether this will actually happen.
So at this point I do not see an actual solution to the compliance issues we face. Regarding your question about control: for my university, some £100m of annual funding depends on meeting the deposit requirement. This makes it very risky to leave control to a third party, unless there is a solution that actually delivers and that comes with clear guarantees and assurances. Currently it is very hard to see that our academics would trust Elsevier to deliver this in a partnership that does not create future issues for them.
We remain open to discussing these issues, but that requires a proposal that is an actual solution.
The Florida model integrates publisher and university workflows. If the UK rules do not allow for this then perhaps they should be changed, upgraded as it were. The coming Brexit transition from EU to UK funding might be an opportunity to make some changes.
These are UK specific rules that have nothing to do with the EU. The UK has the strongest open access mandates and EU states are years behind the UK in this respect.
The Florida model does not support UK university workflows. Why should the UK change its mandate to match what a specific publisher is offering if it does not meet requirements?
In no way do I think that the UK should change its requirements (just want to be clear). I am curious, though, whether phase two of this project, in which FL gets the author manuscripts from ELS, would meet the UK requirements. I’m relatively agnostic on phase one of this project, though it does seem to me that Florida is getting some data from ELS that others are paying ELS for as part of PURE, and if other publishers did the same it would potentially eliminate the need for duplicate payment for a campus RIS system. Phase two, however, seems a really exciting opportunity to ensure that deposit of manuscripts happens.
I think it depends on which UK requirements you’re talking about, as HEFCE’s differ from the RCUK’s. This would seem to help with HEFCE, but RCUK has a preference for Gold OA.
In the UK, HEFCE’s requirements have effectively superseded RCUK’s – especially as meeting HEFCE OA requirements will also meet RCUK’s requirements. Meeting HEFCE requirements is what the universities work towards.
When we discussed this with Elsevier they stated clearly that they have no plans to deposit author accepted manuscripts directly into university systems. If that view is changing, I think UK universities would be interested to hear more about what Elsevier are proposing.
The connection with the EU is that under Brexit EU funding of British science has to be transitioned to UK funding and a great deal of money is involved, perhaps a billion pounds a year. This will probably be a major transition in British science, so it is a good time to refine or even rethink the OA rules. (I happen to be studying this transition.)
So far we do not know how Brexit will actually work, although I agree it will probably be a major transition. However, I see no reason why that would change HEFCE’s policy to require deposit of manuscripts; if anything there will be less money for research, which will make green OA more attractive than continuing to fund hybrid OA. Therefore I am not sure it is relevant in this context.
We have just identified one obvious reason to change the HEFCE deposit requirement, namely to allow for something like the US Public Access model, which uses CHORUS and links to the publisher’s website for access. Plus if there is a major short term shortfall of funding, which is likely in my view, then compliance costs become an issue. What is the least-cost OA model? This is the question that needs to be asked.
APCs certainly take a hit in this case. If the universities are struggling just to pay salaries, or laying people off, then they will not have funds for APCs. It could be a wild ride.
I agree that hybrid open access would be the first likely victim. In fact, at Imperial College we made the decision last year to no longer fund hybrid from (oversubscribed) College internal OA funds. If RCUK funding for OA was cut short, extending that decision to the RCUK fund would be the logical next step for us.
Whether funders and universities would agree to accept a model where publishers provide the peer-reviewed manuscript through their own infrastructure would in my view depend on whether publishers offer a solution that guarantees permanent access to the manuscript for everyone, in a funder-compliant way. It would also have to be a cost-effective solution. Considering that articles are a key output for universities, institutions may still want to have a copy in a system that they control though, and may well be prepared to pay a little extra to ensure that. I think universities would also be concerned that, for example, should they decide not to pay subscriptions to a publisher anymore the publisher infrastructure would still guarantee free access to the manuscript.
In the UK discussions at least I am not aware anyone has proposed such an infrastructure.