This is a blog post that will please no one. That is not the intention; I am not writing it to pick fights. But the topic is open access (OA), and on this topic, fights inevitably erupt; it is scholarly communications’ equivalent of the Culture Wars. For my part, I stand with Voltaire: The perfect is the enemy of the good. Already in the background I can hear advocates of perfection beginning to sharpen their swords.
So, without reference to the many arguments on all sides of the matter: how can we make OA work?
For starters, we need a software platform. Initially this need not be complex, but it must scale to handle large amounts of material, and that material cannot be restricted to text or PDF files. The platform might be one huge repository, or it might be several repositories, all indexed by various means, including the most important one today, Google Web Search. Qualified researchers upload their work (which we still call “papers”) to this repository. We have models for such repositories now at many universities, but the scale suggests another model: the huge cloud-computing services of companies such as Google, Scribd, and Amazon, among others. It is the miracle of Moore’s Law that makes these data centers possible at little or no cost to the user. Consider for a moment that Google now permits a user to upload any document, for free, to the Google Docs application for online storage. That is the kind of platform we require.
Second, unless we are willing to have the repository filled with junk, spam, and reckless outpourings, we need some way to filter papers that are uploaded. In traditional publishing, the answer to this is simple: have editors, including peer reviewers, assess papers prior to publication. But this is costly and time-consuming; indeed, one of the reasons arXiv came into existence was to speed up the process. Whatever the many merits of peer review, such review prior to publication may slow down the dissemination of ideas, and speed should be one of the goals of any OA service.
How, then, do we determine what can be posted and what cannot? The answer is to formalize some of the policies now in place at many major universities, policies that I call “provostial publishing.” Unlike traditional publishing, where editors review each paper for publication, provostial publishing determines which authors can post to the repository. The requirement: the author must be affiliated with or sponsored by an established institution. Thus both Harvard and MIT have mandated that faculty deposit their papers into an OA repository. No one reviews those papers beforehand; it is enough that the authors have achieved a position on the faculty. Whereas editors select papers, provosts select authors.
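To make the distinction concrete, here is a minimal sketch of a deposit gate under provostial publishing: the system checks the author, not the paper. Everything here is hypothetical — the roster name, the subject-area tags, and the `can_deposit` function are illustrative stand-ins for whatever an institution’s sponsorship records would actually look like.

```python
# Hypothetical sketch: provostial publishing authorizes authors, not papers.
# A provost's office (not shown) would maintain this roster, mapping each
# sponsored author to the subject areas the institution vouches for.
SPONSORED_AUTHORS = {
    "jones@ultimate.edu": {"linguistics", "cognitive-science"},
    "smith@publisher.example": {"publishing", "digital-media"},
}

def can_deposit(author_id: str, subject_area: str) -> bool:
    """Gate a deposit: the author must be sponsored for this subject area.

    Note there is no review of the paper itself -- only of the author's
    institutional sponsorship, which is the whole point of the model.
    """
    return subject_area in SPONSORED_AUTHORS.get(author_id, set())
```

A publishing specialist with no cognitive-science credentials would be turned away from the cognitive-science repository, exactly as described above, without any editor reading the paper.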
Provostial publishing asserts a baseline level of quality control for what would otherwise be open to massive abuse and “data dumping.” We want OA to be open, but we don’t want it to be foolish. So, for example, I have experience in the world of publishing and digital media and may be permitted to deposit papers in a repository in those areas. (This, by the way, is precisely how the Scholarly Kitchen operates.) But suppose I hankered to present my grand theory of cognitive science. I have no credentials in the field, no doctorate, no research record. My paradigm-busting paper on cognitive science would not have the blessing of a provost or other sponsor and thus would not be entitled to a place on the repository’s servers. Similarly, a cognitive scientist with no experience in publishing, who has not gone through the years of apprenticeship, would not be able to deposit documents in a repository dedicated to publishing matters. This is not a free-speech issue. The Web abounds in venues; we needn’t open every service to anyone who comes along.
A large collection of papers openly available to anyone to read creates its own set of problems: Which papers are worth paying attention to? After all, not all of the provost’s picks get it right 100% of the time. Our OA service needs a form of post-publication peer review. And here we are quite fortunate, as the current crop of Web 2.0 services presents plentiful models for online commentary. Professor Jones, who is a member of the faculty of Ultimate U., deposits her paper, “Specific Qualities of the Generic,” in an OA repository. Indexed by Google and other services, the paper quickly comes to the attention of people working in the field, who post their critiques alongside Jones’s document. The software enables comments and comments on comments; the paper lives with its commentary all around it — in one virtual place, for the convenience of all researchers.
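The commentary model described above — comments, and comments on comments, living alongside the paper — is essentially a tree. The sketch below shows one way the data might be structured; the class names and the `comment_count` helper are my own illustration, not the API of any existing repository.

```python
from dataclasses import dataclass, field

# Illustrative sketch of threaded post-publication review: each paper
# carries a list of comments, and every comment can itself carry replies,
# so the commentary forms a tree rooted at the paper.

@dataclass
class Comment:
    author: str
    text: str
    replies: list["Comment"] = field(default_factory=list)

@dataclass
class Paper:
    title: str
    comments: list[Comment] = field(default_factory=list)

def comment_count(comments: list[Comment]) -> int:
    """Count a paper's commentary at every level of nesting."""
    return sum(1 + comment_count(c.replies) for c in comments)
```

So when Professor Jones replies to a critique of “Specific Qualities of the Generic,” her reply is attached to the critique, the critique to the paper, and the whole conversation stays in one virtual place.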
Some papers will attract a great deal of commentary and others none at all. This is as it should be. Papers with important information will be cited repeatedly, which in turn will give them higher search-engine rankings and bring other readers to them (a process known as “the law of increasing returns”).
Our OA service has thus put an end to one of the most inefficient aspects of traditional publishing: whether a paper is good or bad, it costs the same amount of money to put it through the editorial review process. Post-publication peer review aligns effort and cost with the quality of the material.
This leads us to the economics of our new OA service. By switching from pre-publication peer review to post-publication peer review — and placing a big bet on the utility of search engines — we have shifted the large, ongoing costs of managing a publishing operation to a one-time investment in the software platform, which enables the deposit and review of papers. Some current OA services charge large fees to authors, but this is because they are clinging to the editorial model of traditional publishing. The combination of Provostial Publishing, cloud repositories, and post-publication peer review drives the cost of scholarly communications down and down. Recall that you can upload anything to Google Docs. If the platform is properly designed, the marginal cost of adding new papers and commentary approximates zero.
To finance this service (putting aside the start-up costs), an author would pay to have his or her paper deposited in the OA repository. We don’t know how much to charge, but we know the formula: the number of papers multiplied by the deposit fee per paper must exceed the ongoing operating costs. If those operating costs are, say, $1 million a year (it is simply astounding how little it costs to operate cloud-computing services once the underlying platform is built) and we anticipate that 1,000 papers will be deposited each year, the minimum fee for each author would be $1,000. If we forecast 10 million papers, an author would be charged $0.10. We may find that the administrative cost of collecting such small sums hardly makes it worth the effort. Perhaps upon being granted an advanced degree, a prospective author would simply write a check for $50 for lifetime deposit fees, subject always to the sponsorship of a provost or a provost’s proxy.
One policy that I would strongly urge upon any OA service is to invite commercial exploitation, provided that such commerce does not require that any of the research material go behind a pay wall. This recommendation runs counter to the common practice of stipulating that there will be no commercial use of the material. Capital, however, makes things happen. We cannot tell in advance what new services entrepreneurs will come up with, but we are likely to find exciting new add-on capabilities for the OA content. It is one thing to insist that all research content be OA, quite another to banish financial incentives altogether. The scholarly community would benefit from harnessing the profit motive to the aims of researchers.
What’s not to like about this model, which I contend could become economically sustainable in a short time? Or I could say, What’s to like? This model incorporates many of the innovations of services such as arXiv, PLOS One, DSpace, and BMC, but it does not deliver everything that we expect when we turn to an established journal. The key to this model is to substitute information technology for human-mediated editorial activity and the investments in branding that go with it. Perhaps the trade-offs are too great. On the other hand, this kind of development may be inevitable, at least in part, for some disciplines. Once established, such services may go through a process of continual improvement. If they don’t satisfy the needs of the research community, they will disappear.
And then it’s time for the next experiment.