Cameron Neylon

Open access (OA) scholarly publishing is a contentious subject. The loudest, most-readily heard voices are often those from the extreme ends of the spectrum, those lost in unrealistic idealism or those mired in the mundane details of running a business. Pairing these extremes leads to interesting, but ultimately unproductive conversations.

Instead, we might turn our attention away from the shouting, and focus on those somewhere in the middle, a group more interested in finding real-world, pragmatic solutions to translate idealism into functional publishing models. Cameron Neylon has, for several years now, been one of the most thoughtful and thorough proponents of OA and opening science in general.

Trained as a biophysicist, Neylon has been at the forefront of thought in understanding the way new means of communication interface with scientific research. He’s recently taken a new position as PLoS’ Director of Advocacy, and will concentrate on both immediate improvements and on long-term strategic development.

I don’t always agree with Cameron, but have great respect for his approach, his willingness to listen, and his ability to drive high-minded goals in realistic and achievable terms. At the height of the Research Works Act (RWA) furor, I asked Cameron if he’d be willing to do an interview with the Scholarly Kitchen, as I felt it would benefit our readers to see a different side of the OA movement than is often portrayed here.

Due to our busy schedules (including the announcement of Cameron’s new position), it’s taken a few months to get this together, and we both apologize for the delay.

Q: In a blog posting about the RWA, you predict a near future where all major funders will require Creative Commons licensing and full OA for any publications arising from their funded research. But your vision is not of a world without publishers; instead, it’s one of new opportunities, as you see new players arising to meet the community’s needs. What are these new opportunities likely to be, and what areas and services should current publishers focus on to better serve this changing landscape?

A: The core effect of the Web is that it makes a need for publishers (in the very narrow sense of “organizations that make things public and disseminate”) simply go away. We have traditionally bundled lots of functions together in the organizations we call “publishers,” and the question we face is the choice of which of these services we want, and indeed which ones we can continue to afford. At the same time, there are a whole new set of user needs that arise from the sheer scale that arises when publishing (again in the very narrow sense) becomes so cheap.

I’ve argued often that the traditional mode of filtering, pre-publication, by blocking the appearance of some works in specific channels, doesn’t really add any value, or at least doesn’t provide a good return on investment. But we clearly can’t abandon filtering. It is at the core of coping with information abundance. So the core service that those organizations that used to be publishers can provide, and which has real value, is discovery — the services that validate, collect, index, summarize, but above all bring the right content to me at the right time. The idea of overlay journals might be a good stepping stone here. Journals themselves don’t need to go away, and collections of works are useful, but I think it’s a stepping stone toward providing the infrastructure that will allow anyone to collect works together and act as an editor or curator.

The other big area where there are massive opportunities is to sort out the back end. Getting our current literature properly organized and properly searchable would be a big step, as would be thinking about collecting and indexing other kinds of research outputs. And in an author-pays world, these are valuable services that I think authors will be willing to pay for. In a sense it is academic SEO (search engine optimization), but if the interest of the researcher is in ensuring that their work gets the widest possible play and if enabling that is a value offering from publishers, then my sense is that it helps align everyone’s interests more effectively.

Q: In that same article, you suggest that, “several major publishers will not survive this transition.” If anything, OA seems to favor economies of scale in lowering per-article costs (e.g., the success of PLoS ONE as compared with PLoS Biology). This would suggest that an OA mandate would lead to market consolidation around the major publishing houses that can offer that level of scale. Can you elaborate on why you think major publishers would instead fail, and where that future leaves the smaller, independent publishing houses and the self-publishing societies?

A: There are two big issues here — one is economies of scale for those publishers who make an orderly move (or start off) in open access article processing charge (APC)-based models. There is actually a really interesting question here because scale, as it spreads across disciplines, also creates some problems. It’s much easier to automate the handling of a conventional peer review process if it has a narrow scope. The best case of this is the IUCr Journal Acta Cryst E, which runs a surplus on an APC of around $150 an article. It does this because it accepts only one form of article, descriptions of crystal structures. The authors conform to a form-based authoring process, and a lot of the technical validation is done by computer, which makes the human peer review process much easier and cheaper to manage.

Because PLoS ONE and other wide-scope journals are covering wide areas of research, they necessarily need processes that can cover many data types and different disciplinary approaches, and these are still human, and therefore relatively expensive, processes. It is also fairly difficult at scale to place as much reliance as you might like on community peer pressure to contribute — and this may be a real advantage that society and independent journals have — a close-knit community can effectively run shoestring-budget journals such as the Journal of Machine Learning Research. With open-source publishing platforms getting to the point where they are both very powerful and quite usable (probably not quite usable enough yet — but getting very close), I think we will see a resurgence of community- and society-based small journals and publishing houses, which I think will offer very interesting competition to the mega-journals. I would very much like to see the conversation around OA for societies shift from it being a threat to it being an opportunity to focus on peer review as their core community service.

But the issue of scale also brings institutional inertia. And this is what I was really referring to in the article. My sense is that organizations like Elsevier, Wiley, and the American Chemical Society have so much institutional inertia built in that it is very difficult for them to even think about the kinds of change required to adapt to this new world. There has been next to no innovation around business models from these players; all of that has come from the new players, largely BMC, PLoS, and Hindawi.

Q: Are there areas of compromise that could help speed the acceptance process for OA mandates? For example, many publishers would be much more supportive if they were allowed to serve the free versions of papers, rather than losing that traffic to PubMed Central (PMC). Is that a reasonable request? Are there ways we can continue to experiment to find the appropriate lengths for embargoes?

A: I disagree with embargoes on principle — really, they are a compromise within a compromise en route to sorting out the issues of how we pay for the set of services we need for good research communication. What we really need is a grown-up conversation about how we can set up a market for those services that works and enables a transition for everyone involved. The question of who hosts “the” free version is peculiar to me — if publishers think they can do a better job of that then they should, and demonstrate that by successfully competing with other sites that host that work. But funders and others have good reasons for not trusting subscription-based publishers to do this properly — every time NPG does a system update, the access systems manage to forget the custom settings that make genome papers freely accessible. No one is being evil here, but the defaults are all set to limit access — doing anything else is non-standard. Show us you can do this properly and well, then there’s a discussion to be had. But at the same time, funders are always going to want to keep a copy themselves; it’s just good practice.

What we need to figure out is the stepping-stones that will get us from where we are to where we want to be. There are, I guess, three possible routes here. The first is the one that we seem to be stuck in. Funders propose another step up in policy, subscription publishers get upset, funders row back slightly and then implement; publishers grumble but accede. The second route would be a real collaborative exercise. The Finch report in the UK is an effort in this direction, although I have concerns about how successful that will be. The third would be for some traditional publishers to really step up to the plate and offer something exciting, perhaps entirely different, but in any case a real step forward going beyond the current round of debate. This just really hasn’t happened.

Fundamentally there just isn’t the level of trust either between subscription publishers and funders or between those publishers and the OA movement. And without that trust it is difficult to see how these kinds of conversations can happen effectively. From my perspective the bottom line is that it is much more effective to lobby funders to move that ratchet step by step.

Q: In a recent blog posting, you discuss the tremendous advantages offered by networked systems. It’s easy to see in terms of the examples used, which feature abstract subject matter (mathematics) and easily digitized datasets (images of the sky). How do those advantages translate to research where there is a need for physical work which can’t be as readily distributed: clinical research that requires seeing patients, or wet-bench laboratory work that involves hands-on testing of cells or tissues?

A: Yes, this is exactly the challenge, but I think it’s promising that we see good examples exactly where we’d expect to see early successes. That means we’ve got a reasonable understanding of what is going on. So the key points I make in that piece are that these advantages arise in systems where you have well-connected networks with very low friction to the transfer of resources. For information, the Internet is incredible in both the scale and lack of friction for the transfer of digital information resources. So the question in transferring this into the physical lab world is: can we build that scale, and can we make transfer more frictionless?

In terms of the scale we can, because the major issue at scale is discovery — and we can do that with metadata and information resources. So I can discover that someone somewhere has exactly the plasmid, protein, sample, mouse, cell line that I need, as long as that information is available somewhere. This is clearly technically possible (one of the arguments behind open notebooks) but equally culturally difficult, as it gives away information on what people are currently working on. But if we could create the right incentives, then the discovery part is relatively easy.

Currently the actual transfer of the material has a lot of friction. Institutional obsession with non-standard material transfer agreements is one example. We could reduce this by setting up standardized agreements. Physical storage can be a problem, as can transport, but in many cases we have centralized infrastructure for storage and delivery (e.g., for cell lines, animal strains, plasmids, etc.). Of course, this infrastructure is often very poorly funded. But the bottom line is that we could do a better job of this if we chose to address it. And we don’t need to reduce the friction to zero; we can get significant benefits from each reduction we can achieve.

There have been some interesting efforts in trying to provide private infrastructure to support public research in this space, letting researchers register their materials and capabilities for sale. This hasn’t taken off yet, but I’ve recently been working with a completely outsourced biotech company, one where there is one person and a laptop at the center distributing a set of testing and development tasks for the building of a prototype medical device. I’m also seeing interesting signs of collaborations that are spontaneously developing online between grad students and postdocs, discovering that someone somewhere has the instrument, techniques, or skills needed to solve their problem. We have to expect this to be slower than in the digital information space, because there are more significant local costs, but that doesn’t mean that there isn’t value to be obtained. And the smart people will figure out how to turn that to their advantage.

Q: In that article, you suggest the research paper is part of a continuum of data sharing among collaborators. Is that the true purpose of a research paper? During my research career, I read many papers in subject areas where I was never going to work. I didn’t care about work in progress; what I wanted was an efficient summary of work that was completed, an understanding of what was already known. If we’re to build a new system that serves the communication needs of researchers, can that one system satisfy both the needs of information sharing for works in progress among collaborators and the needs of historically documenting completed research? Is it better to separate the two and create systems optimized for each? Or is this an artificial separation?

A: I feel that’s an artificial separation, in part because I have a strong sense that science is never finished. Given recent reports that some horrendous proportion of widely cited published biomedical experiments couldn’t be reproduced inside big pharmaceutical companies, I find it difficult to think of a single paper as anything but an artificial construct that gives the impression of being a whole story — but is actually only a piece of it. This distinction colors a lot of my thinking: the artificially closed narrative that we need to create for a rhetorical purpose versus the “real situation,” which is always much more fluid and incomplete.

That said, we clearly need summarization and integration at lots of different levels. In a weekly lab meeting the PI doesn’t want all the details, but wants more than will end up in a traditional paper. If I am looking at an area for the first time, I probably don’t want the most recent paper; I may not even want the most recent review. In practice I will usually start with Wikipedia, head for a few websites from there, and then dig into reviews. We have layers upon layers, with different levels of confidence and completeness. But my problem is that currently all of these layers are constrained into one kind of structure, which carries with it a certain set of assumptions about indexing and discovery that are often orthogonal to the actual problem I have at hand.

I much more frequently need to know, “Has someone tried this? How did it go?”, than what their scientific story is – or even what the question was they were trying to answer. Day to day there are lots of different information needs and many of them could be served in principle by access to an underlying layer of more immediate but incomplete information. And a lot of this ends up unpublished so we never reap any benefit from it.

If you follow the logic of the network opportunities, then the system will be at its global best when everyone shares everything instantly but there are good mechanisms to support discovery and trust (being unable to find the thing you need when it does exist is another form of friction). This raises two immediate problems. The first is the social one of people not being willing to share that much; we’re not going to change human nature, but I think we will see a continuum where there is much more sharing than occurs at the moment. The second is that this sharing requires effort and resources, and that we don’t have these perfect discovery tools. So a working system will find a balance somewhere between the costs of sharing, the costs of good discovery, and the benefits that accrue.

But that doesn’t really answer your question. Fundamentally, I don’t personally see a qualitative difference between sharing my notebook, authoring a paper, or writing up a Wikipedia article. They are all summaries at different levels that are likely to be of interest to different possible audiences in different ways. But the core of the networked viewpoint is to not assume that we know exactly what those audiences are, or the kind of interest there might be in any given output, but to be open to the idea of the unexpected user and use. If there’s no additional cost, then there’s no harm in supporting them. In reality there is a cost, sometimes very small, sometimes a bit larger, and we’re going to have to work through the process of deciding when and where that cost is worth this benefit that by definition we can’t quantify up front. What we can do is try to understand what the benefits are at the system level and plan our resource allocation accordingly.

Q: The Bayh-Dole Act gives researchers and institutions full ownership of the intellectual property (IP) derived from federally-funded research. Given this ownership, can a federal agency legally compel a researcher to publicly release that IP in the form of a data mandate? Should the research paper that results from the grant be considered researcher-owned IP as well?

A: I think it is interesting to explore both the technical legal reality here as well as the intention behind Bayh-Dole. The reality is that it vests IP in the institution and places obligations on the institution to optimally exploit that IP. I have wondered whether US institutions are actually properly discharging their obligations when they allow authors to sign over copyright to publishers. I am not a lawyer, and certainly not a US IP lawyer, but I would imagine that the fact that the NIH mandate and existing data policies, alongside the developing NSF policies, haven’t really raised serious issues about incompatibility with Bayh-Dole means there isn’t really a legal issue. But I don’t pretend to understand all the subtleties.

If we go back to what Bayh-Dole was supposed to achieve, I think the answer becomes much clearer. The intention was to ensure that research was appropriately exploited, with an emphasis on commercial exploitation, but my sense is that the intention was to ensure that research got results. In the context of US political philosophy, the way to do this was to hand over IP, because it meant that the institutions had an interest in maximal exploitation because they got the direct benefit. However, there is an interesting question: most institutions have focused on the narrow question of how to optimize IP exploitation on a project-by-project basis. If we take a slightly different view, that of the global optimization of exploitation, the view may become quite different. Is it in the interest of a given institution to pursue IP protection for each and every project? Or would they do better if they gave away the majority of IP to support innovation more generally? In one narrow sense it is easy to argue that institutions have done badly. They still have tech transfer offices in most cases; if they were any good and actually making money, they would have spun themselves out.

If we take another couple of steps back and look globally, then there is also good evidence that giving government data away maximizes the economic return on that data. This has been most studied with respect to geographical data but it seems to hold well for most government data and there is no particular reason for that not to extend to research data. So if we assume that governments invest in research to generate (in part) economic returns and it is the role of government to globally optimize those returns then it makes perfect sense for government to mandate open data.

So I’d like to see Bayh-Dole re-interpreted for the 21st century. I think it entirely appropriate that there is an obligation on both researcher and institution to maximize the impact of their work. If one way to do this is to vest IP in the institution, then that’s great. My suspicion is that it is slightly counterproductive in practice. But the principle that there should be a global optimization is a good one. And that should include the economic return on the copyright on researcher-authored papers.

Q: F1000 Research is a new journal that is going to attempt to create something like the continuum you’ve suggested. You served as a consultant in its formation. How does this journal fit with the way you see research being conducted and communicated in the future?

A: I think F1000 Research is a really interesting experiment that takes another step down the road towards pulling apart the different parts of the services that the organizations we call publishers offer. The biggest component of the cash cost of conventional peer-reviewed journal publication is the managing of the peer review process (the biggest cost overall is the cost of that peer review itself, but this is a non-cash cost). We also don’t really have good data on whether our conventional processes are any good — or more specifically, what they are good for and what they are not. At the same time, the community is still very wedded to those traditional processes.

The recent history of successful innovation in scholarly communication has been dominated by projects that offer something new enough to be interesting and to offer real advantages, but not so different as to be unrecognizable. PLoS ONE is a great example of this: recognizably a journal, containing articles, and with what is basically a very conventional peer review process, but with a twist on the selection criteria. Alongside this, a lot of attention was paid to the internals to keep costs down, but most of that isn’t immediately obvious from the outside. So while PLoS ONE is a radical shift, it was still within the bounds of what people expected.

Because of PLoS ONE, the world has now shifted a lot. That idea of “publish everything publishable” is a lot more acceptable today than it was five years ago. I see F1000 Research as another step towards what might be a better validation system. It might be a step too far for most of the community, or it might hit a sweet spot and take off. But it’s a valuable experiment. Another effort, which is in some ways similar but a bigger jump, is Figshare, developed by Mark Hahnel. I thought at the time that Figshare was unlikely to get major traction because it did seem like too much of a jump. It is a kind of preprint or data repository, or a venue for micropublications, and it felt to me like it was too different to take off. I think I was probably wrong about that, and that suggests to me that the community might be ready for more radical experimentation than we’ve seen up until now.

Q: F1000 Research is a privately owned for-profit venture. Much of the resentment voiced against commercial publishers seems to stem from the profits they generate. Is offering profits for providing services like this an acceptable means of driving progress? Is there a recognizable line that can be drawn between reasonable rewards and exploitation?

A: I think this is a misunderstanding. While there are parts of the OA community who have a strong anti-commercial ethos, the majority of us are fairly economically liberal (in the European sense!) and perfectly happy with people making money. Good value products need to be sustainable and that means they need to generate a reasonable return. We also see good returns as a strong signal that something is sustainable — which is also very important. I also think there are massive neglected business opportunities in the scholarly communications space.

The objection to the level of profits that Elsevier in particular are making is subtler. Firstly, there is the scale of those profits, regularly and consistently over 35%. In most other markets this would be a signal of market failure, and a recent JISC report has said that there is clear evidence of market failure in this sector. Making 40% in one year is the sign of a company ahead of the curve, but in a functioning market returns usually hover around 5-15% when averaged over time. So there is a real objection to commercial concerns taking large sums of money as a result of effective monopolies and market failure. The second issue is that this money is taken out of the research system — if it were being re-invested in the infrastructure or in improved services, then there would be a discussion to be had, but it is just being siphoned out to shareholders.

Finally there is a lack of understanding on both sides of what costs really are and what the price could be. The opacity created by some publishers on what their costs and returns are doesn’t help this. It is clear that shoestring journals can run very cheaply and that pre-print repositories are also very cheap. It’s also clear, at the moment at least, that much of the customer base wants more than these basic services but we don’t really have a functioning market. Nor do we have particularly well informed customers making rational choices.

So overall I’m very comfortable with the profit motive being a driver as long as we have a functional market. And that to me means that authors need to be making choices about the services they purchase and how much they are prepared to pay for them, and what they expect to get in return. Others disagree with me, but I think this is the best way to drive real technical innovation where we need it at least in the medium term. But there is one other important point. The vast majority of the resource we are talking about is government money of one sort or another. Subscription publishers often get in a lather about government encroaching on their business model but there never was a business model, just a government subsidy, whether it’s paid through research grants and article processing charges or through libraries. If private companies want to play there and can provide good value then they should do that, but equally they should never forget that this is a market created by government largesse and one that is therefore ultimately subservient to the policy direction of government and the funders.

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.


36 Thoughts on "An Interview with Cameron Neylon, PLoS' New Director of Advocacy"

I’m puzzled by the comments about Bayh-Dole in relation to copyright transfer because, as far as I’m aware, Bayh-Dole never addressed copyrights at all, merely patents. Also, universities historically have not cared about copyrights and left their management entirely to faculty, whereas they have set up offices to focus on management of patents.

The comment about lack of transparency of costs of journal publishing might be answered by suggesting that there is an open source of information about such costs in the journal operations of university presses that are attached to public institutions. Those costs cannot be much different from the costs of privately held companies, with allowances made for subsidies some presses receive on office space, utilities, etc.

My understanding of Bayh-Dole is that it relates to “intellectual property” that results from funding. I don’t know if the question of copyrights of papers that come from that funding has ever been fully addressed. Regardless, it’s a key issue in terms of mandates requiring the release of data from research projects.

I’m not sure how informative reports from university presses will be for determining costs, mostly because the size and scale of such presses rarely rivals that of commercial publishers. A company that publishes 1500 journals has different costs for things than a university press that publishes 6. You’re getting bulk deals on physical goods like paper, on services and likely you’re able to afford your own in-house staff for things like IT, publishing platforms, warehousing and fulfillment, lawyers, etc. The cost savings can be enormous, which is likely why we see so many society journals migrating away from smaller university presses to commercial companies which can offer much lower costs and hence better returns.

Thank you for this — terrific, even-handed stuff. I’d like to see other bloggers here weigh in on Cameron’s response to the “continuum of data sharing” question.

I think he’s right that there’s a continuum of interest and that the research paper is in many ways an artificial construct. But it is an artificial construct with great value and importance. As Cameron notes, there is great need for summarization. The research paper is a highly evolved form that serves that need on the broadest level. To use perhaps a poor analogy, the research paper is the dollar bill of the research economy. Some people need 50 dollar bills, some people need nickels. But most people need some dollar bills for their daily purchases. I see the research paper not as a definitive statement or a complete story, but instead as a marker along a path. We have completed this stage of our research, here’s what we did and here’s what we learned. On to the next stage…

I think Cameron’s personal information needs are reflective of his research field and his position. We all have our biases, our vision of the way science works, based upon our experiences. The information that can be gained from asking “Has someone tried this? How did it go?” varies quite a bit depending on the nature of the research. If the data from a failed experiment is easily analyzed, run through an algorithm and checked, that’s very different from troubleshooting and understanding a complex physical process involving complex reagents and equipment. For many types of research, learning that someone else tried an experiment and failed tells you nothing–did they fail because the experiment is poorly conceived or because of technical incompetence? Does this compound fail to cure cancer, or did you accidentally put too much salt in your buffer? It’s a reason why science requires redundancy and repetition, particularly of failures. Given the enormous differences in types of data and their value for reuse, I’m not sure there are universal solutions or practices that can be applied across all of science.

That said, there are areas where the sharing of data has obvious and immediate value. Given the current funding climate, I’m less sanguine than Cameron about the levels of openness we’re likely to see in the near future. Jobs and money are so tight that very few researchers are going to give up their self-created advantages over competitors until those advantages have been fully exploited. The approach most likely to succeed is getting researchers to release datasets after they’re done with them, much like the requirement to deposit sequences in a database upon publication. That’s a far more likely scenario than requiring constant release of work in progress.

One also has to think of ways to incentivize this sharing. Researchers write papers because there are clear career and funding rewards offered. Translating those rewards onto a micro scale for pieces of data is hard to conceptualize. I think Cameron does an excellent job, though, of discussing the tradeoffs and balances that come into play. For a researcher, is there more benefit to be gained from spending time cleaning up my raw data for others to use, or should I instead perform the next set of my own experiments? In today’s system, the answer is obvious, and the cost is perhaps higher than Cameron suggests, given that time and effort are often a researcher’s most precious commodities.

I pretty much agree with Cameron on the services idea (and of course on data sharing and all that). But instead of waiting yet longer for existing publishers or startups to seize this opportunity, I propose that libraries are in a unique position to provide these services: they already have the know-how and the infrastructure for these tasks, and they collectively control huge budgets to pay for the services Cameron mentions. Rounding current estimates up, the world-wide academic publishing market is about 10b ($ or €) annually, paid for largely by libraries. Given a (again rounded) 40% profit margin for commercial publishers, actual publishing costs come to around 6b annually. Thus, if libraries were hypothetically to cancel all subscriptions overnight and provide the services Cameron speaks about themselves, they’d have to spend 6b to replicate the existing services and would have 4b left to invest in new ones. In the long run (as these things don’t happen overnight), the taxpayers funding scholarly communication stand to save something close to 4b every year (minus the investment in modern services) and get a modern scholarly communication system in return. When have we last seen that: a chance to modernize a core system of the scientific community not by spending but by saving money?
Even if those estimates (which are not mine) were 75% off, taxpayers would still stand to save 1b annually, which isn’t peanuts.
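To make the back-of-envelope reasoning explicit, here is a minimal sketch of the arithmetic behind those rounded figures. All inputs are the commenter’s own rounded estimates (a 10b market, a 40% margin), not verified market data.

```python
# Rough arithmetic behind the rounded figures in the comment above.
# All inputs are the commenter's own rounded estimates, not verified data.
market = 10_000_000_000        # world-wide academic publishing market, per year ($ or EUR)
profit_margin_pct = 40         # assumed commercial publisher profit margin

publishing_costs = market * (100 - profit_margin_pct) // 100   # what the services actually cost
potential_savings = market - publishing_costs                  # what currently leaves as profit

# Sensitivity check: even if the estimate were 75% off, some savings remain.
worst_case_savings = potential_savings * 25 // 100

print(publishing_costs)    # 6_000_000_000
print(potential_savings)   # 4_000_000_000
print(worst_case_savings)  # 1_000_000_000
```

The point of the sensitivity line is simply that the conclusion does not hinge on the exact market size: even a heavily discounted estimate leaves savings on the order of a billion per year.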

Which services were you thinking that libraries already have the staff capable of supplying? Copyediting? Managing peer review? Selecting journal editors? Marketing? Show me a library that has staff with these professional skills, please.

Libraries digitize their ancient texts and publish our theses and many of our papers in their OA repositories. Some universities even still have university presses. Many libraries are directly involved in researching and developing semantic web technology, Linked Open Data, and other technologies that many publishers haven’t even heard of but that are crucial for a modern way of doing science. Most libraries interoperate quite tightly with the computing departments and core data facilities of their universities, which provides them not only with crucial know-how but also with the necessary, top-notch infrastructure. 95% of all peer review is handled by academic editors anyway, so that’s a complete non-issue as well. Thus, most of the skills are already present (many of them not even represented at commercial publishers, where they should be), and what little is still missing can easily be bought for sums that won’t even register on the scale of ‘billions’ or ‘millions’. There’s a *lot* of expertise to be bought with 4b burning a hole in your pocket every year, and it’s not all that much that would be required.

But copyediting? Marketing? You’ve got to be kidding me!

I guess you haven’t read the Ithaka Report (2007) that devoted a whole appendix to the skill sets of libraries and university presses as one way of demonstrating their differences and supporting their recommendation that the two kinds of entities cooperate so as to realize synergistic benefits. As to peer review, there is a whole lot more to the process than just the reports solicited from expert readers. Journals that are not well copyedited are also not well respected. If marketing were not important, why does every publishing house that has a journals department also have marketing staff in that department?

But copyediting? Marketing? You’ve got to be kidding me!

So true. Authors hate it when their articles are readable and mistake free. And no one likes seeing their work reach a wide audience.

I’m not even sure where to start here, other than to say this is an excellent example of the kind of idealism mentioned at the beginning of the blog posting above. It’s an interesting idea to think about in the abstract, but so far removed from reality as to have little practical real-world value. The proposal shows a startling level of naivete about how publishing works and how libraries work, among other things.

First there’s the notion of cancelling all subscriptions overnight. Subscriptions are increasingly sold as consortia deals these days, and these are often long-term deals. The libraries get a low rate locked in for a number of years. “Overnight” in this case may be as long as five years for some journals.

Then there’s the question of the entirety of the published literature extending back over the history of science. This proposal essentially does away with access to anything already published. I’m sure science won’t mind starting over from scratch.

And I’m sure there won’t be any trouble at all with science communication for the years (decades?) it takes to build this new system. People can just sit on their results until it’s in place.

As Sandy Thatcher has pointed out, there are limited ways in which the skillsets of librarians overlap with those of the many specialists in the publishing field. Librarians also have full-time jobs already, which apparently they’ll need to neglect in order to become publishers.

Then there’s the magical $6B that’s meant to pay for all this. If a librarian tells their administration they’re cancelling all journal subscriptions, but that the money must remain in the budget for some ill-defined set of services, how likely is it that the money won’t instead go back into the university’s general fund? Think of a place like the University of California system, which is perpetually strapped for cash, and good luck with implementation.

You’ll also create an enormous disparity between institutions willing (and able) to spend heavily on these services and those without the funds (or desire). The rich will get richer and the poor will become second-class scientists.

I’m sure there will be no issues with librarians at an institution being pressured to push through papers from that same institution, even if they fail to pass their reviews. It’s not like they’re part of the same institution and stand to benefit from the success of others at that institution.

And of course there are the societal costs, as you’ll be putting tens of thousands, if not hundreds of thousands, of people in the publishing and related industries out of work.

And that’s just scratching the surface. Once again, I’ll bring up the appropriate German word here:
Verschlimmbesserung: an attempted improvement that makes things worse than they already are.

Hmm, I don’t really see how these points invalidate anything I’ve said. Point by point:

> “Overnight” in this case may be as long as five years for some journals.

Thanks for putting a more exact time on my “don’t happen over night”.

> Then there’s the question of the entirety of the published literature extending back over the
> history of science. This proposal essentially does away with access to anything already
> published. I’m sure science won’t mind starting over from scratch.

I indeed did not mention anything about back issues. Here’s a potential scenario that might (probably?) require some legal support from the respective legislative bodies.
I know libraries that have digital copies of their back issues. Combining those from many libraries will go quite a way. Downloading and saving those currently accessible, in preparation for the transition, will add to that, as will mining collections such as those at Mendeley and other user-based resources. Taken together, this should cover basically everything.
Again, there would probably need to be some legal support or a one-time fee to the then ex-publishers to allow one-time access for transfer.
Given the support required for such a transition to happen at all, this is clearly not a problem. If such a transition were to happen, there would be much bigger problems than that.

> And I’m sure there won’t be any trouble at all with science communication for the years
> (decades?) it takes to build this new system. People can just sit on their results until it’s in place.

Which is precisely why it “won’t happen over night”. Imagine if about 20 KIT-sized libraries were able to cut subscriptions to their ten most expensive journals (theoretically) for ten years. Hardly anybody would notice any reduction in access from cutting just ten journals. However, if these 20 libraries cooperated, they’d have 2.6m EUR every single year to invest in the necessary services and applications for the transition, which would happen, say, 5 years later: a total budget of 13m EUR. Tiny initiative, barely noticeable decline in access, huge potential for development. Five years later, we’d have much of the necessary prerequisites to ditch publishers on a much larger scale.
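The pooled-budget arithmetic in that scenario can be sketched as follows. Note that the per-library figure is back-solved from the stated 2.6m EUR total; it is an assumption for illustration, not a published number.

```python
# Sketch of the pooled-budget arithmetic above. The per-library figure is
# back-solved from the stated 2.6m EUR total; it is an assumption, not data.
libraries = 20
annual_cut_per_library = 130_000     # EUR freed by cutting ten expensive journals
years_before_transition = 5

pooled_per_year = libraries * annual_cut_per_library          # 2.6m EUR per year
transition_fund = pooled_per_year * years_before_transition   # 13m EUR in total

print(pooled_per_year)    # 2_600_000
print(transition_fund)    # 13_000_000
```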

> As Sandy Thatcher has pointed out, there are limited ways in which the skillsets of librarians
> overlap with the many specialists in the publishing field. Librarians also have fulltime jobs
> already,which apparently they’ll need to neglect in order to become publishers.

Of course not – what are the 6b currently paying for? Why wouldn’t those people (or their replacements) work for libraries instead? If I claimed libraries would be doing this with their current staff, they’d of course be saving much more than 4b; it’d be close to 10b. I thought that was quite obvious; I guess it wasn’t.

> Then there’s the magical $6B that’s meant to pay for all this. If a librarian tells their administration
> they’re cancelling all journal subscriptions, but that the money must remain in the budget for
> some ill-defined set of services, how likely is it that the money will instead go back into the
> university’s general fund? Think of a place like the University of California system, which is
> perpetually strapped for cash and good luck with implementation.

Who ever said the libraries would do this in secret, without letting anybody know about their evil plan to take over the commercial publishing business? Clearly, plenty of the many OA supporters sit on the committees for library budgets (I personally know quite a few), and given the amount of frustration among faculty with the current system, I wouldn’t be surprised if some budgets even (temporarily!) increased in support of the transition to something so superior. Thus, quite the opposite of what you claim.

> You’ll also create an enormous disparity between institutions willing (and able) to spend
> heavily on these services and those without the funds (or desire). The rich will get richer and
> the poor will become second-class scientists.

Are you serious? You’re talking to someone whose library can’t subscribe to Cell or Neuron (Cell Press, Elsevier) and who thus regularly uses #icanhazpdf on Twitter (or the equivalent on other social media), as well as other less legal ways to obtain access, and you seriously want to tell me about poor and rich libraries? You’ve got to be kidding me!
If PLoS can provide waivers, so can a library-based system. That’s an absolute non-issue.

> I’m sure there will be no issues with librarians at an institution being pressured to push through
> papers from that same institution, even if they fail to pass their reviews. It’s not like they’re part
> of the same institution and stand to benefit from the success of others at that institution.

Why would these kinds of issues be any different from the way things are now? In fact, a librarian may be less inclined to cave in to pressure, as he’s not dependent on the financial success of his library the way professional editors are on that of their journals. You remind me that there may be an additional facet to the high correlation between IF and retraction rate: eager professional editors looking for that surprising, headline-making discovery to keep the bucks rolling in…
Thus, there are always biases and with a library-based system they’d be at worst equal and at best much improved compared to now.

> And of course there are the societal costs as you’ll be putting tens of thousands, if not
> hundreds of thousands of people in the publishing and related industries out of work.

Again, a canard that has already been debunked (look for Heather Piwowar’s post on this, IIRC). Nobody, or only very few, would lose their jobs. They may have to change employers, but that’s it.

Thus, in conclusion, your points are either already debunked, confirmations of what I said, or non-issues. I’m sure you can do better than that. My proposal is certainly not without problems and issues (that’s why I don’t actually think it’ll happen), but yours aren’t among them.

And what do you do about all the businesses, like JSTOR, Project Muse, etc., that already have digitized back issues of thousands of journals? No legislature is going to wipe them away with a flick of the wrist. And remember that libraries looked to Google to do digitization on a scale that academic libraries couldn’t begin to match.

Your assumptions about faculty attitudes are belied by studies like the ones that have surveyed faculty in the California system and found that they are, mostly, quite content with the system as it is and have no pressing need to change it.

Why do you think libraries at one institution are going to subsidize the work of individuals at other institutions? I can’t tell if you’re talking about all libraries bonding together to form one Voltron-like service bureau or if you’re relying on each to individually see to the needs of its own institution. If the former, you’re going to have a hard time explaining to administrators at big university X why they have to contribute 10X more than the library at university Y just because they initially had a bigger serials budget. If the latter, then you run into the inequities and the biases I mentioned. And trying to build a system to organize and allocate funds for all of these services to all of these institutions for every possible field of research is at best a Herculean task, and more likely an impossibility.

The first statement was more in response to your suggestion that “if libraries were to cancel all subscriptions over night”.

Back Issues: ah, so all we’ll need to do is either turn librarians into criminals, or completely rewrite centuries of copyright law and international treaties. Piece of cake. No one will object to a massive governmental property grab like this. Given that governments are continually trying to increase protections on copyrights (SOPA, PIPA, ACTA, CISPA, etc.) what makes you think they’ll suddenly do a 180 and eliminate intellectual property rights, give up the tax revenue they generate (not to mention the lobbying and campaign contributions)?

Current Communications: so it will take top universities 5 years to reach a point where they can effectively communicate research results. So their researchers are out of the game for 5 years. Then each subsequent university has to face the same transition, most without the same level of funding.

Library budgets: see above, there’s nothing secret about it. If you give money back to an administration that’s facing an $800 million shortfall, you’re going to have a hard time getting it back from them. Remember that many of those current evil publishers making those evil profits are owned by universities. If they’re already looking to turn a buck on scholarly communication, what makes you think they’ll be willing to stop? And this raises the question of why you’d turn things over to a library when you already have a functioning publishing house on campus. Talk about reinventing the wheel.

Furthermore, if faculty advocates have such a huge influence on library budgets, why are those budgets failing to keep up with inflation? Why are they continually being cut? How about putting some of that influence toward valuing libraries to begin with?

Poor/Rich disparity: again, if a united effort, how will you convince some to pay more than others? If individual libraries are providing services, then researchers at rich schools will be at an advantage over those at poor schools. Right now you have the ability to publish in the absolute top journals, no matter where your lab is. Under your system, you get as much service for your paper as your library can afford.

Biases: Remember all the outcry about worries that eLife was going to become a house organ for HHMI/Wellcome? What you’re proposing is exactly that. Instead of journals acting as neutral third parties, you’re creating a system of boosterism and marketing for a university, run by that same university. So much for objective judgements. If an editor at a current journal has a conflict of interest, they’re supposed to recuse themselves from working on that paper. Under your system, every paper contains a conflict of interest.

Jobs: so you’re going to take every single employee of every single publisher and publishing-related company and put them on the payroll of university libraries? Let’s not even get into relocation costs, it seems to me that the salaries alone are going to eat up the majority of your proposed savings.

Again, an interesting idea in the abstract, completely unworkable on multiple levels in reality. Even if one struggled through the massive bureaucracy and ethical conflicts it would create, I’m not convinced you’d end up with anything better than we have now. You’d just be shifting everything from publishers to libraries, spending enormous amounts of money trying to recreate what already exists, creating enormous new problems, violating international law, hurting the economy, and setting science back years if not decades.

All to accomplish exactly what?

That’s more like it, thank you. I’m in a hurry so will abstain from commenting on the trivial non-issues, sorry, especially those bordering on insulting me by insinuating I’m more than a little dim.

> I can’t tell if you’re talking about all libraries bonding together to form one Voltron-like service
> bureau or if you’re relying on each to individually see to the needs of its own institution.

In my view, libraries would share archiving via a set of standards, achieving functionality similar to peer-to-peer sharing. Each library would cater to its own faculty, but load, and thus costs, would be allocated according to usage shares (quantifying both depositing and accessing, of course). IIRC, there are models for this already in use. Such agreements would be subject to regular updates, of course, as is standard practice.
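As a purely hypothetical illustration of the usage-share idea, a proportional allocation might look like the sketch below. The library names and usage counts are invented for the example; real agreements would of course be far more involved.

```python
# Hypothetical illustration of usage-share cost allocation as described
# above. Library names and usage counts are invented for the example.
def allocate_costs(total_cost, usage):
    """Split total_cost across libraries in proportion to their usage
    (e.g. deposits plus downloads)."""
    total_usage = sum(usage.values())
    return {lib: total_cost * n / total_usage for lib, n in usage.items()}

usage = {"Library A": 5_000, "Library B": 3_000, "Library C": 2_000}
shares = allocate_costs(100_000, usage)
# Library A carries half the total usage, so it carries half the cost.
print(shares)   # {'Library A': 50000.0, 'Library B': 30000.0, 'Library C': 20000.0}
```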

Thus, neither herculean nor impossible as it does already exist.

> Given that governments are continually trying to increase protections on copyrights (SOPA,
> PIPA, ACTA, CISPA, etc.) what makes you think they’ll suddenly do a 180 and eliminate
> intellectual property rights, give up the tax revenue they generate (not to mention the lobbying
> and campaign contributions)?

Two things: 1) the same pressure that always keeps these bills from passing, and 2) the prospect of saving 4b in subsidies every year.
Granted, this means it won’t work if the publishers spend more than 4b per year on lobbying.

Finally, let me correct you on your conclusion:

> You’d just be shifting everything from publishers to libraries, spending enormous amounts of
> money

No, eventually saving 4b annually, starting with just a few million at a time.

> trying to recreate what already exists,

Yes, by modernizing it from a horse-drawn carriage to a Tesla Roadster.

> creating enormous new problems,

No, eliminating many current problems (and of course creating new ones, as always, but those are minor compared to the completely irrational, unsustainable, and anti-scientific mess we have now). In fact, what we have now is so bad that it’s hard to come up with something that’s actually worse.

> violating international law

Agreed, that will be an issue: dedicated legislation for publicly funded research has to be written. This would not be the first time in history, though.

>, hurting the economy

No, incentivizing a new economy.
Or: yes, getting rid of government subsidies of approx. 4b/annum.

Pick your answer.

>, and setting science back years if not decades.

A smooth transition, with probably some, but barely noticeable, hiccups along the way.

Am off to present our research in Montreal! I still think there are bigger problems than those you mention, but the fact that you’re not bringing them up makes me somewhat more optimistic that we will be able to overcome them.

I agree strongly with Björn on this issue and recently addressed the UKSG publishing conference with an extremely similar proposal. Furthermore, the hostility and sneering cynicism with which this thread has responded to the suggestion are astounding and hardly conducive to debate. Hardly “turning away from the shouting”.

Yes, libraries are not currently placed to take on the entire job of the academic publishing industry. But neither are libraries and institutions placed to continue paying subscription costs. Libraries also face the threat of third-party service providers encroaching on their role, and they too face uncertain times in a world devoid of physical collections. By cementing the library as a bi-directional research team, co-ordinating with other institutions for review, and hiring staff with expertise in other areas using funds from superseded subscription fees, the savings that Björn outlines are feasible.

On the subject of back issues: many libraries have already implemented systems such as LOCKSS and pre-negotiated the right to own, in perpetuity, those issues for which they have paid. Subscriptions can then be phased out.

Finally, as Björn mentions, there *are* problems with this model that would need to be addressed, but as a second-best measure, or regulative threat, this model has merit. Publisher profits are extortionate and cannot carry on. It is not academics and their under-funded libraries who are being “idealistic”, but publishers who have their heads in the clouds. I don’t feel that author-pays OA solves this financial problem, merely displaces it to a different area of the funding cycle and reduces research output. A different solution is needed, even if only to regulate publisher behaviour, but when one comes along it just gets shouted down.

Martin, you’re right about the tone and I do apologize. There’s an inherent level of frustration that’s built up over the years in responding to far-fetched proposals that make no practical sense. What exactly is the point of this plan? As far as I can tell, the key goal is to get rid of publishers. It doesn’t offer a new, better system, just one that changes hands; it asks that the entire system be rebuilt from scratch and requires a complete overhaul of the intellectual property laws of every nation on earth.

If, as you state, the goal is to lower publisher profits, I have a much simpler, more practical solution for you: Stop publishing in commercially-owned journals. Publish only in journals owned by the research community itself (research societies, institutions and university presses). Since these journals are all owned by the community, it is the community’s will that determines their pricing and all of their policies. If the community demands lower prices or different access models, the community controls the means to put them in place.

This doesn’t require anything new to be built, though likely you’d want to start some new university presses to replace commercial outlets, or new journals through existing presses. The biggest issue is asking researchers to act against their own interests by not publishing in some of the top journals that contribute most to their career advancement.

No complete rewrites of entire legal systems necessary. The downside, though, is that it would permit “publishers” to continue to exist, though realistically, we could all just call ourselves “librarians,” which might go a long way toward resolving some of the anger issues.

Thank you for another thought-provoking article and to Cameron. May I just address one specific point in his final answer: he says “money is taken out of the research system”, “it is just being siphoned out to shareholders”: this is untrue, society-publisher partnerships mean that a highly significant fraction of journal revenues are returned to science via society royalties. (I am a publisher for Elsevier and am finalising many of these payments right now.)

“Society-publisher partnerships mean that a highly significant fraction of journal revenues are returned to science via society royalties.”

Hi, Andrew. I’ve often heard this claimed in general terms, but I’ve yet to come across any percentages. I realise of course that you can’t discuss the finances of specific journals, but do you have a sense of what percentage of revenue tends to find its way back to societies?

A rule of thumb is that if the publishing company is doing the lion’s share of the work, the society gets less back. So, if the publishing company is doing copy editing, print and online production, marketing, sales, hosting, author relations, managing peer-review, etc., the society gets a lower percentage. So it really depends on what work the society wants to offload and which risks the society wants to shield itself from.

That makes sense, but are we talking about societies typically getting in the region of 10-20%, or 40-80%, or 1-2%? You know, just ballpark.

Above 40% and below 80%, probably in a bell-shaped curve. I think that’s a fair portrayal, but I have only indirect experience, having always been at independent not-for-profits. But in the bidding I’ve seen, that’s about where it can end up, often with a signing bonus to the society at the beginning as well. Unless you’re a society really devoted to publishing — or really good at it — these deals can be pretty enticing.

Thanks, Kent, that’s useful. Also, a pleasant surprise — I’d imagined a rather lower figure.

Thanks to David for the chance to speak to this audience and also to Andrew for the question. I think you’re missing my point, and in quite an interesting way. The issue around taking money out of the system is that of the profits that leave the sector entirely. I would see Elsevier as part of our scholarly system, albeit one I don’t always see eye to eye with. The objection is not Elsevier taking money per se, it is that a billion dollars of profit leave the sector entirely and get distributed to shareholders. If Elsevier re-invested that money, either back into research or into development then I think the conversation would be quite different.

This issue of money leaving the so-called sector has nothing to do with open access. It is part of the for-profit versus not-for-profit issue, which is largely moot. Profit is the return to people for the use of their money. If a non-profit borrows money instead, the interest and principal payments also leave the system, going instead to the lenders. For that matter, so do the rent, phone, electricity, computer costs, wages, and every other purchase or expense. People who make these sorts of arguments seem not to understand capitalism and capital flows generally. Capital must be returned, and paid for; it cannot go into R&D. In fact, it is my understanding that most of Elsevier’s so-called profit actually goes to bondholders, which means it is actually interest payments on borrowed money. I think the return to shareholders is just competitive with other industries, which it has to be for survival.

Of course the alternative is to nationalize the industry, which is what OA proposals sometimes look like, a government takeover. Then the capital comes from the taxpayers. It has to come from somewhere.

I do not see any US federal funders asking Congress for the large amounts of money it would take to transition to an author-pays industry. Thus the tension he speaks of does not seem to exist. NIH is not proposing to scrap PubMed Central in favor of author pays. The transition may occur for market reasons, but government action seems unlikely, which is good.

“My sense is that organizations like Elsevier, Wiley, and the American Chemical Society have so much institutional inertia built in that it is very difficult for them to even think about the kinds of change required to adapt to this new world. There has been next to no innovation around business models from these players; all of that has come from the new players, largely BMC, PLoS, and Hindawi.”

I have to disagree with this statement in particular. “My sense is . . .” suggests that this is highly impressionistic, and actual experience speaking with people at these companies and in some cases participating in strategic discussions with them offers evidence of the opposite. All three of the companies Neylon mentions have made subtle, smart, and effective changes to their business models over the past 10 years, and have consistently implemented innovation. And this is not to celebrate them alone. Nearly all scholarly publishers that are doing well today have instituted progressive innovations over the past decade. It’s almost arrogant to state that BMC, PLoS, and Hindawi are more innovative. While it’s true they trumpet open access endlessly as “innovative,” it’s an old innovation that Neylon, in his new role, is paid to pump up. That doesn’t make it unique or superior.

One could just as easily argue that open access has achieved community inertia, as it has changed its model very little over the past decade. The fact that we’re in a cycle of repeating the same old back-and-forth — complete with boycotts — only underscores the inertial effect OA advocates have exerted. I think we’d all be better served if we agreed that it’s one way of doing things, not the only way, and move on. Who knows what innovation OA inertia is holding back?

Kent, I have little interest in engaging in an argument based on ad hominem attacks. I am interested in talking about the details of the business model innovations you describe. It is true that the two major innovations in open access business models (first author payment charges, and second large-scale journals) are now about ten and five years old respectively, and that we have seen a period of consolidation. It is a valid question how we can enable more and easier innovation in this space.

There is a new wave of ultra-low-cost infrastructure coming, based for instance on LaTeX pipelines and WordPress, that is widely accessible, and this will be interesting to watch. I don’t think the numbers or the ease of use add up yet, but for small community journals I can see the shoestring (or community time donation, if you prefer) approach being a very attractive model in a few years. This already works in some places, so it will be interesting to see how far it can spread.

The other big innovation, although I grant it possibly doesn’t really count yet as a “business model”, perhaps more an infrastructure model, is eLife. The fact that funders have got to the point where they just want to cut out the middleman and fund the communications infrastructure directly is a big change. Broadly speaking they already fund it indirectly, so why not take direct control? I don’t believe this is the long-term approach that the funders will take with eLife, but there is no question they could afford to do so if they chose to, and that raises some interesting questions.

On the other side we have DeepDyve, which is interesting, although I think deeply flawed as a value offering. SciVerse and the shift that it represents in focus for Elsevier is also interesting. I will also be interested to see what NPG does now that Phil Campbell has said that a major shift towards OA is inevitable.

Author-pays is not a particularly novel innovation, and not a particularly good one for a variety of reasons; in fact, it wasn’t sustainable until the bulk-publishing/mega-journal model was brought to life. Open access had to evolve to survive, and mega-journals have been OA publishers’ salvation. Without them, PLoS might well be struggling now. Instead, it’s turning profits many publishers of any stripe can’t match. This “innovation” is interesting, because while small subscription publishers can exist (I run one), I don’t know of a niche OA publisher that lasts long. OA publishers have to go big or go home. It’s become axiomatic to the model, I believe.

As for technology, it’s nice that some fields can use inexpensive platforms to run low-cost journals. But not all can, and for most journals in most disciplines, technology is a line item, and not the biggest. There’s a lot more to publishing than technology. JMLR is a case in point: while it runs cheaply, it lost its non-profit status because it didn’t have enough infrastructure in place to attend to IRS filings for three consecutive years. No amount of WordPress or LaTeX is going to solve that administrative problem.

As for eLife, I think it’s naive to call what it’s proposing an “innovation.” Who will deal with objective editorial decision-making when the sponsor runs the outlet? Who will investigate malfeasance? What reader will trust that this is anything more than the equivalent of monographs in the gift shop of three funding bodies? Do you really think money doesn’t buy predictability and leverage? When readers pay, they have the leverage. When funders pay . . .

Innovation is a word used to blanket a lot of things. Inherently conflicted publishing models, commercial models that can’t thrive unless they are heavily skewed toward acceptance and bulk publishing, and stances toward the future that only set us all on a race to the bottom while sponsors, universities, corporations, and governments get bigger and more capable hardly amount to a set of innovations I want to embrace.

“I’ve argued often that the traditional mode of filtering, pre-publication, by blocking the appearance of some works in specific channels, doesn’t really add any value, or at least doesn’t provide a good return on investment. But we clearly can’t abandon filtering. It is at the core of coping with the information abundance.” Neylon

A couple of points here: I hope that I am not taking Neylon’s comments out of context, but there appears to be a self-contradiction here. The “traditional mode of filtering … doesn’t really add any value”, but filtering cannot be abandoned. So, the filtering done by journals (OA and non-OA) does add some value. Also, Neylon assigns the value of filtering to “coping with information abundance.” I believe that, historically, its value also lies in “quality assurance.” The tsunami of information would likely strain any filtering process, and the rise in retractions can be interpreted not so much as evidence of systemic failure but rather as evidence that scholarly publishers working hand in hand with the academics running and using today’s journals recognize the critical issue of quality and are addressing it. “Crowdsourcing” may have some promise to help improve the filtering process, and it is not just start-ups who are capable of doing this. Even if one argues that the Mendeleys out there can be better or best at it, the OA model does not follow from that argument.

“The other big area where there are massive opportunities is to sort out the back end. Getting our current literature properly organized and properly searchable would be a big step, as would be thinking about collecting and indexing other kinds of research outputs.” Neylon

I may be misunderstanding the task, but there are commercial entities that are already addressing these opportunities through continuous investment and innovation. One need only look at ISI/ThomsonReuters, Scopus/SCImago/Elsevier, Google Scholar Analytics, or Blackbaud. So these are not new opportunities Neylon has identified.

The Harvard faculty and Neylon criticize at least one of those four for excessive profits, and Neylon and JISC go further, claiming that a 35%–40% margin is a sign of market failure. The 5%–15% that they prefer sounds very much like the cost-plus metric applied by UK government procurement officers. The economic perspective of JISC and UK government procurement is hardly that of someone holding a “liberal economic” view. David Crotty has addressed this point tangentially and more vigorously in his exchange with Martin above, so I won’t comment further, except to point out that the market failure with which faculties and government should be concerned is more likely the college debt “bubble” that The Economist covered last April. And no, the cost of journals, textbooks, and libraries barely moves the needle in the underlying analysis of that bubble.

If any of the above comes across as shouting down OA proponents, apologies. The OA model — especially as it might apply to monographs in the humanities and social sciences — interests me a great deal, and I suspect that where the model can be helpfully applied in smaller and less wealthy disciplines, it might have something to offer beyond a shouting match and a lobbying fight over regulation.

Bob, thanks for the comments. In terms of filtering there are two separate issues, and I agree that neither is directly related to OA. The first is the social question: are research communities ready for radical changes to the peer review process? The answer is clearly no. For those of us who believe radical change is needed, the task is to make the case better, and to take the time to gather more evidence and demonstration proofs that other systems can work.

The second is the technical question: what forms of filtering or quality assurance might be more efficient, useful, and cost-effective? For me the core point is to realise that we need to start with the “for what purpose” question. Our traditional systems provide a form of filtering and QA for a specific purpose – to help researchers decide what articles to read to inform their scientific thinking – but just at first order there are many other reasons for wanting to access research information. So in broad terms I don’t think I’m being contradictory, just pragmatic. I can see short-term gains from applying different filtering systems, and long-term gains arising from radical change that puts the end user at the heart of the filtering process, not some middleman who may or may not know what any specific end user needs or wants. But pragmatically we need to get there from here, and make the case as we go. It’s a question of the “adjacent possible”, if you like.
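The idea of putting the end user at the heart of filtering can be shown with a toy sketch (the papers, tags, and reader profile below are all invented for illustration, not drawn from any real system): rather than a single upstream gatekeeper deciding what is worth reading, each reader’s own interest profile ranks the full stream of outputs.

```python
# Toy sketch of end-user-centred filtering: rank papers by overlap
# between each paper's tags and the reader's own interest profile.
# All data here is invented for illustration.

PAPERS = [
    {"title": "Protein folding kinetics", "keywords": {"biophysics", "kinetics"}},
    {"title": "Open peer review at scale", "keywords": {"publishing", "peer-review"}},
    {"title": "Crowdsourced annotation of genomes", "keywords": {"genomics", "crowdsourcing"}},
]

def rank_for_reader(papers, interests):
    """Order papers by how many tags they share with the reader's interests."""
    def score(paper):
        return len(paper["keywords"] & interests)
    return sorted(papers, key=score, reverse=True)

# A hypothetical biophysicist's interest profile ranks the same stream
# differently than a publishing researcher's would.
biophysicist = {"biophysics", "kinetics", "genomics"}
for paper in rank_for_reader(PAPERS, biophysicist):
    print(paper["title"])
```

Real recommender systems are of course far more sophisticated, but the design point is the same: the filter is parameterised by the end user, not fixed by a middleman.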

In terms of target economic metrics I didn’t introduce those figures as a target – more in response to previous discussions where others have demanded to know what a “reasonable” profit/surplus would be. For me the answer is any profit which shows that the market is functional and doesn’t show characteristics of market failure. I’m not an economist but my understanding is that there are a set of characteristics of market failure, and that the JISC report finds that several of them are present in the scholarly communications industry. In saying that I’m an “economic liberal” what I meant is that I favour a functioning market as the best route towards driving value and finding a fair price for the services in the sector. That doesn’t require regulation at the level of profit margins – but it might require changes or regulation that prevent or break up existing monopolies and monopsonies.

Finally, as regards the back end: TR deal with metadata, but haven’t yet managed to make progress on data citation, for instance; Scopus is doing some interesting stuff with search, but the actual technical infrastructure of the document store is a bit frightful. I am sure what Google does is interesting, but we can’t use that as infrastructure because they won’t provide an API. In my view we need to actually re-imagine the underlying object store for scholarly literature. An XML-first approach could take us in this direction – but few publishers, and no large ones, are running a true XML-first pipeline as far as I am aware (some say they are, and indeed some even think they are, but they’re not really). Separation of parts would make a whole new range of search and discovery tools (and indeed granular quality assurance approaches) possible. But it’s a hard step, because it involves making big changes to underlying infrastructure and perhaps also to legacy content.
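The “separation of parts” point can be sketched concretely. Assuming articles are stored as structured XML (the element names below loosely follow JATS conventions, but the sample document itself is invented), a discovery tool can search within a specific section type rather than against one opaque full-text blob:

```python
# Minimal sketch of granular search over an XML-first article store.
# The sample article is invented; element names loosely follow JATS.
import xml.etree.ElementTree as ET

ARTICLE = """
<article>
  <front>
    <article-title>An invented example article</article-title>
  </front>
  <body>
    <sec sec-type="methods">
      <title>Methods</title>
      <p>Samples were prepared using protocol X.</p>
    </sec>
    <sec sec-type="results">
      <title>Results</title>
      <p>Protocol X improved yield.</p>
    </sec>
  </body>
</article>
"""

def search_sections(xml_text, term, sec_type=None):
    """Return titles of sections containing `term`,
    optionally restricted to one section type."""
    root = ET.fromstring(xml_text)
    hits = []
    for sec in root.iter("sec"):
        if sec_type and sec.get("sec-type") != sec_type:
            continue
        text = " ".join(sec.itertext())
        if term.lower() in text.lower():
            hits.append(sec.findtext("title"))
    return hits

# Query only the Methods sections -- not possible against a flat PDF.
print(search_sections(ARTICLE, "protocol", sec_type="methods"))
```

The same structural separation is what would make granular quality assurance possible: a reviewer or automated check could be scoped to the methods, the figures, or the data citations alone.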
