How much does open access publishing cost an institution? “Almost nothing,” according to computer scientist and open access advocate Stuart Shieber. The reason? Most open access publishing funds have sat unused.
Writing in his blog, The Occasional Pamphlet, Shieber provides the number of open access articles paid for by institutions participating in the Compact for Open Access Publishing Equity (or COPE), and the numbers are dismally low. Since its inception nearly one year ago, Columbia and Cornell have sponsored two and three articles, respectively. Dartmouth and Harvard have sponsored just one article, and MIT and the Memorial Sloan-Kettering Cancer Center both stand at zero.
Two institutions are exceptions to this trend — Berkeley (92) and Ottawa (25) — both of which will cover articles published in hybrid journals (subscription journals that permit authors to make their articles freely accessible). The other COPE signatory institutions restrict their funds to full open access journals.
The results of these findings are not that surprising. Most faculty do not consider open access publishing high on their academic priority list (see the Berkeley report and the Ithaka report), and those who do often have access to their own research funds to support this type of publishing. What is surprising is how Shieber spins the results: Limited use is a success — not a failure — and the venture is cheap if viewed from a cost-per-faculty vantage point. As he concludes:
The bottom line is that the direct costs of running a COPE-compliant open-access fund are trivial, and the administrative costs of dealing with handfuls of requests are trivial as well.
There is something very odd about this way of thinking. Public service programs are often considered failures when they sit on large sums of unspent money. Unused funds signal a conspicuous lack of demand, a sign that a service wasn’t required and that the money could have been better spent another way. Most funders require unspent funds to be returned, and this act often justifies a budget reduction in the next fiscal year.
In the case of Cornell’s Open Access Publication Fund, $50,000 could have been used to avert the cancellation of journals and databases, could have purchased hundreds of books, or could have supported a fledgling service where there was a strong indication of demand. This $50,000 was diverted from the funds selectors use to purchase library materials and services, then left to sit unused in a special account where it benefited no one.
Yet using a very strange form of accounting, Shieber argues that these few publication expenditures represent peanuts when calculated on a cost-per-faculty basis. Cornell, for example, spent only $3.08 per faculty member and Harvard only $1. Sounds like a good deal, right?
Shieber arrives at these figures by multiplying the number of sponsored articles by $1,500 (his estimate of author-side publication costs), adjusting for the amount of time the program has been in place to produce an annual figure, and then dividing the sum by the total number of university faculty to yield an annual cost per faculty member.
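To make the method concrete, the arithmetic can be sketched as follows. The article count and the $1,500 per-article estimate come from the post; the program duration and the faculty head count are illustrative assumptions, not Shieber's published data.

```python
# Sketch of Shieber's cost-per-faculty arithmetic for Cornell.
ARTICLES_SPONSORED = 3     # Cornell's total, per Shieber's post
COST_PER_ARTICLE = 1_500   # Shieber's estimate of author-side fees
PROGRAM_YEARS = 1.0        # roughly one year since inception (assumed)
FACULTY_COUNT = 1_460      # hypothetical head count, for illustration only

annual_cost = ARTICLES_SPONSORED * COST_PER_ARTICLE / PROGRAM_YEARS
cost_per_faculty = annual_cost / FACULTY_COUNT
print(f"${cost_per_faculty:.2f} per faculty member per year")
```

With these inputs the figure lands near the $3.08 quoted above, but note that the denominator is the entire faculty, whether or not they used the fund.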
What’s wrong with this measurement? Lots.
First, the real cost to Cornell of its open access publication fund is $50,000. If only three faculty used this fund, the cost per faculty was $50,000/3, or $16,667 per participating faculty member. It makes little sense to divide total costs across the entire faculty, since the entire faculty were not beneficiaries of this fund. Similarly, the $45,500 left in the Cornell COPE account should not be ignored. As I mentioned, these were funds that could have gone to purchase materials and services for the Cornell community. This fund represents unrealized capital, and by hoarding it, the library harmed (rather than benefited) the Cornell faculty.
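The alternative accounting above, which treats the whole allocation as the cost and the actual users as the denominator, is just as easy to sketch (the fund size and user count come from the figures already cited):

```python
# Cost per beneficiary: total allocation divided by actual users.
FUND_SIZE = 50_000    # Cornell's Open Access Publication Fund allocation
BENEFICIARIES = 3     # faculty who actually drew on the fund

cost_per_beneficiary = FUND_SIZE / BENEFICIARIES
print(f"${cost_per_beneficiary:,.0f} per participating faculty member")
```

The same program thus looks like $3 per head or nearly $17,000 per head, depending entirely on whose heads you count.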
The Compact for Open Access Publishing Equity has been promoted as an “experiment” in scholarly communication, and yet this experiment appears to be immune from any form of reasonable evaluation. For instance, the data show strong support for the hybrid-OA publication program offered by Berkeley and Ottawa, yet Shieber is unwilling to cede an inch to a transitional hybrid model, calling it “double-dipping.”
There is much to learn from experiments such as COPE. We should treat them as learning experiences that can help us guide future policy, and not as pet projects that require twists of logic and poor accounting to justify their existence.