Publishers invest and take risks on behalf of authors. You are not a publisher because you press a button labeled “Publish” on WordPress or Facebook — WordPress and Facebook are your publishers, while you are their contributing author. WordPress and Facebook are taking the risk. You become a publisher when you’re willing to countenance a significant set of financial risks on behalf of authors.
One of the risks scholarly journal publishers take is to set up and maintain article processing and peer review systems without any guarantee the investments will pay off. But one journal’s approach to peer review is not the same as another’s, despite some comparable systems and conflated vocabulary. This means their costs aren’t the same, and the levels of risk aren’t the same. Some approaches are very complex and multi-faceted (riskier) while others are more basic and easier to manage (less risky). As you’d expect, the more review you do, and the more selective you are, the more peer review costs.
The 2012 PEER study found that the cost of peer review in an environment of Green OA was approximately $250 per paper — but with interesting caveats. The report states that:
[the a]verage cost $250 per manuscript for salary and fees only, excludes overheads – infrastructure, systems etc. and is heavily affected by rejection rates.
In the full economic analysis from PEER, released last January, the authors also note that they found:
[n]o indication of significant economies of scale may be traced at the editorial level, except for submission tracking.
In other words, the more editorial work you do and the more selective you are, the more expensive the process becomes:
Such costs correlate with the rejection rate of the journals, to the number of reviewers per manuscript and to the number of rounds of review.
In a recent Nature News article analyzing the costs of publication, the reported expenses of two OA journals peg the cost at around US$300.
Most of these approaches to peer review assume the most basic model — papers sent outside to two referees, who rate the papers, suggest improvements, and recommend publication or rejection. The standards tend to eschew criteria like novelty or importance, and focus instead on equally amorphous concepts like “soundness.” These basic models are popular because many OA publishers are attempting to be the low-price leader. As Peter Binfield of PeerJ is quoted in the Nature News article:
The costs of research publishing can be much lower than people think.
Robert Kiley of the Wellcome Trust, a major force behind eLife, is quoted in the new book, “The Future of Scholarly Communication” (2013; Facet Publishing), as saying:
eLife is also seeking to develop an editorial process that reduces revision cycles and accelerates the publication of new findings.
F1000 Research takes this to its extreme, perhaps, by accepting payment before any peer review has occurred, removing any leverage to demand revisions if reviewers find problems with a paper.
My analysis of PLoS expenses puts the cost of peer review for PLoS ONE at US$200 or less. If the conclusions from PEER are correct, these lower costs are not the result of scale or efficiency, but the result of simply doing less review.
Costs can be lower if you leave out major swaths of the peer review process that most academics assume are part of any strong peer review system, or if you don’t perform iterative reviews. According to data assembled by Kaufman-Wills-Fusting from a number of editorial salary surveys, an editorial office should run expenses that amount to about 15% of revenues. This seems to be the case for PLoS ONE once you do the math based on its APCs. Therefore, PLoS ONE’s APCs are low in part because the journal does less review. More review costs more.
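To make “doing the math based on APCs” concrete, here is a minimal back-of-the-envelope sketch in Python. The US$1,350 APC is an assumed round figure for PLoS ONE’s fee at the time, not a number reported above; the 15% editorial-expense share is the salary-survey benchmark just cited.

```python
# Rough check, not actual PLoS accounting.
apc = 1350.00           # assumed article processing charge per accepted paper (US$)
editorial_share = 0.15  # editorial office expenses as a share of revenue (benchmark above)

implied_review_spend = apc * editorial_share
print(f"Implied peer review spend per paper: ${implied_review_spend:.2f}")
# ~$202.50, consistent with the "US$200 or less" estimate above
```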
What investments can make peer review more expensive?
In addition to the overheads mentioned in the PEER study but not included in their cost estimates (anti-plagiarism software, disclosure monitoring and enforcement, blinding and unblinding of manuscripts by staff, handling questions and disputes, and the necessary management staff), more complex peer review involves staff editors and an editor-in-chief, positions that can be expensive to establish and maintain, but which deliver many benefits, including clear field definition, institutional memory, obvious accountability, and editorial leadership.
Recent comparisons of the costs of producing journals by Richard Van Noorden broke journals into print + online, online-only subscription, and online-only open access (OA). I believe a more meaningful cost comparison would revolve around the resources deployed to accomplish peer review — namely, does the journal have a significant investment in a dedicated editorial team, including an editor-in-chief?
I recently analyzed the costs around peer review for our flagship scientific journal, and a few interesting dynamics emerged. Note that all data are reported on a percentage basis, not a pure dollar basis, because the dynamics are what’s interesting, not the actual spend. The costs involved include system charges, outsourcing fees, stipends, and overheads.
We have three levels of decision — reject without outside review (editorial rejection), reject after outside review, and accept. Working through these three levels provides interesting insights into how expenses accumulate in a peer review system that includes paid editors and an editor-in-chief at the helm, and suggests that peer review approaches that don’t include these elements are cheaper simply because they don’t include paid editors.
The first of these, editorial rejection, occurs for about 20% of the papers we receive. The rate of editorial rejection at other journals can be much higher, as I explained in a post last year. Many journals are becoming more aggressive about early rejections because drawing out the review process is both expensive and viewed as unfair to authors who really don’t have a shot. During our editorial review, two paid editors — a Deputy Editor and our Editor-in-Chief — review the paper. Both are senior academics in the field and experienced editors. Handling these papers accounts for about 15% of the expenses we incur to review unsolicited submissions. The paid editors spend some time on these papers, but not as much as they spend on more promising papers.
Rejection after outside review involves the same initial editorial review, but the manuscripts are deemed interesting and relevant enough to send to external reviewers. Two or more reviewers, selected by one or both of the paid editors, weigh in on each manuscript. Managing this process for papers we ultimately reject consumes 54% of the expenses we incur to review unsolicited manuscripts.
So far, we haven’t found anything we want to accept, and 69% of our expenses to review manuscripts have been incurred. We spend the remaining 31% moving a minority of manuscripts through to acceptance — and this doesn’t include the costs of internal editing, composition, tagging, and storage. These are just review costs. In other words, we spend 31% of our review money on 15% of the papers we receive. These are the papers we truly invest the most in (in addition to all the expenses of publishing and managing the papers over time).
For us, the per-manuscript cost is highest for accepted manuscripts. We spend the most time and money on accepted manuscripts because of the Deputy Editor and Editor-in-Chief involvement and the iterative review cycles — sometimes, a half-dozen rounds of review. Accepting a manuscript without the involvement of editors or any iterations would cost only 41% of the full accepted manuscript cost. Less review costs less.
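A quick sketch using only the percentages above shows why the per-manuscript cost is highest for accepted papers. The 65% reject-after-review share is simply what remains after the 20% editorial rejections and the 15% of papers we accept; no actual dollar amounts are involved.

```python
# Relative per-manuscript review cost by outcome, using only the
# percentages reported above (share of papers vs. share of review expenses).
outcomes = {
    "editorial rejection": (0.20, 0.15),
    "reject after review": (0.65, 0.54),  # 65% = 100% - 20% - 15%
    "accept":              (0.15, 0.31),
}

for outcome, (paper_share, expense_share) in outcomes.items():
    relative_cost = expense_share / paper_share  # multiples of the average per-manuscript spend
    print(f"{outcome:>20}: {relative_cost:.2f}x the average per-manuscript cost")

# editorial rejection ~0.75x, reject after review ~0.83x, accept ~2.07x:
# accepted manuscripts absorb by far the most review spending per paper.
```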
Journals without editors-in-chief and paid domain-area expert editors may be able to run less expensively because they offer less than half the peer review service of a domain-specific journal with a full complement of editors. These journals offer less robust peer review — they offer some validation, but no ranking of relevance or importance, both of which are vital for clinicians, researchers, and scientists looking to save time and separate the best from the rest.
This has a bearing on another post I wrote recently about the non-equivalency of peer review processes. We all know that running a peer review process with an editor-in-chief and paid associate or deputy editors is more expensive than running a peer review process that relies only on outside reviewers. It can be more than twice as expensive.
But is it better to have paid domain experts as editors? I believe it is. The selection of an editor-in-chief and the inclusion of domain experts signal alignment with a community and ensure accountability. Even when editors-in-chief behave badly, at least there is a person to blame and clear action to be taken, rather than an amorphous pool of blameless faces in a bureaucratic system. An outside peer reviewer may be utilized less if they underperform, but they aren’t accountable in the way an editor-in-chief has to be. In addition, the energy, experience, connections, and backbone editors-in-chief bring clearly make journals more dynamic and reliable. Their participation in the peer review process means the journal is focused on a well-defined area and not diffused into a blob of unrelated content that drifts into search engines but may never reach the author’s presumed or potential audience.
When OA publishers put strong editors-in-chief at the helm, it serves them well — but it costs more. As the Nature News story notes:
. . . some publishers use sets of journals to cross-subsidize each other: for example, PLoS Biology and PLoS Medicine receive subsidy from PLoS ONE, says Damian Pattinson, editorial director at PLoS ONE.
Apparently, PLoS journals with strong editors-in-chief and more traditional review processes can’t support themselves with their own OA fees, but a large journal with lower review costs (partly derived from fewer rejections) can more than offset this problem. As the PEER study of peer review costs noted:
In some instances, dedicated editors in-house pre-select incoming manuscripts, thus reducing the number of those that go through peer review. This reduces the costs of finding reviewers and of managing the review process, but increases internal costs. In order to guarantee reputation on the one hand and cost control on the other, most publishers will have in their portfolio a group of journals with high rejection rates, reputation and impact and a significant group of more accessible journals.
Named editors-in-chief also create barriers to entry in the market — that is, other publishers have to try to find a comparable editor to head their journal if they wish to effectively compete. This may be increasingly important as various operators try to pass off some questionable approaches as legitimate journals, and as OA becomes more competitive. Low barriers to entry only remain low in the face of mild competition. Having someone well-known and accountable at the helm matters. Whether they were put there legitimately is another concern.
Having internal paid editors who participate in approving every manuscript is more expensive, but it’s a vital part of differentiating between peer review practices and estimating costs for publishing programs.
When we talk of costs, we need to keep an eye on how the term “peer review” is employed, while remembering that peer review doesn’t scale with volume and can vary widely in how thorough and complete it is. Language is the door into thought. If we allow people to knock down the door by making the term mean whatever they want, with no clear distinctions, “peer review” can begin to mean nearly anything. And losing important distinctions may be very costly indeed.
Discussion
21 Thoughts on "More Review Costs More — The Dynamics of a Complex and Varied Expense for Journals"
Kent:
I think that when one cuts costs something has to go, and in S&T publishing that usually means quality.
I agree. What low-cost alternatives try to do generally is appropriate the value of the top-quality provider while not providing the same level of service. This was written to analyze where the true differences exist, so they’re not as easy to paint over.
Wonderful analysis. A question about one of the first statements: I don’t think of WordPress or Facebook as “publishers.” I think of the originators and primary financial/risk stakeholders of the content still as the publisher. Could we think of WordPress and Facebook more as one’s “digital printers”?
There have been off-hand assertions by naive publishing people (and the likes of Clay Shirky) that “everyone’s a publisher” now. The link in that area goes to a post I wrote about this last year. I actually do think of WordPress and Facebook as publishers because they are taking the risk — financing the software and staffing the support to make it possible for me as an author to reach an audience. There are services that will make a print-on-demand (POD) book out of your Twitter, Facebook, or WordPress output, and these are the oxymoronic “digital printers” to me. But if you use one of these digital printers, you as the author are usually taking the risk; you become a self-publisher. Following the risk is really the best way I’ve found to identify the publisher.
I agree with you about the unthinking “everyone is now a publisher” statements; I did not mean to imply that. My view may be quirky, but being a publisher must also include taking a significant risk involving promotion, publicity, and marketing, with the intent to disseminate to some target market. Simply mounting a rant on WordPress or Facebook makes neither the author nor WordPress/Facebook a publisher. The work isn’t published at all; it’s just (IMHO) “out there.”
I think you accomplished your goal.
David’s comment on the gatekeepers (the EIC) is most pointed. Also, what is interesting is that if an EIC is not doing the job, it becomes evident, and then there is that conversation between the publisher and the EIC. Without one, the conversation never occurs and triviality becomes the norm.
Kent, you’ve hit upon one of my favorite topics. I especially appreciated this in light of Patty Matzinger’s comment, tweeted during yesterday’s BioMed Central panel discussion on peer review, that “Editors are leeches.” I have to question what type of Editors she’s dealt with during her career to make such a comment. Not all Editors are created equal, and not all peer review is equal. I often feel as if those kicking up noise about how flawed the peer review process is are the same folks who are watering it down even more. I’ve started to think that this type of less rigorous peer review is leading to the “McDonaldization” of scholarly publications. We’re on a road where everyone can pull up to a drive-thru window, buy something off the dollar menu, and get published. Keeping in the spirit of being in the “kitchen” while writing this, I’d point out that a quality EIC is like a top chef who makes sure that most items on the menu are up to standard, and the publisher/organization is the owner of the five-star restaurant, paying to keep a roof over their heads and the lights on, and for everything else customers take for granted when they reserve a table expecting a pleasant, quality meal.
Having lived through Editor-in-Chief transitions a few times, and periods without an EIC as well (even extended travel can create a period like this), the quote that comes to mind was uttered by a very observant and thoughtful senior editor after an extended trip by an EIC — “It’s good to have you back. Things just work better when you’re around.”
This comment underscores all the points you’ve made — a strong EIC who cultivates a high standard gets things done, and creates clarity around editorial and brand standards. It’s subtle, as well, because there are so many gray areas in editorial work that having a person at the helm keeps you from drifting too far off-course or making inconsistent decisions.
I think that the idea of publishing everything, your McDonaldization, is not necessarily a bad thing. It’s impossible to predict in advance exactly what’s going to be important or useful going forward. And science relies on a tremendous amount of incremental work.
So having outlets for everything at every level, no matter how seemingly trivial, is a plus, particularly given recent increases in search and discoverability. If for some reason that small data point turns out to be useful to someone else, it can be found.
The downside is, of course, that most of what seems trivial is, in reality, actually trivial, and not worth the limited time researchers have to keep up with the literature. And that’s where the top chefs, as you note, come in, creating an invaluable system to help sort by quality and make efficient use of one’s reading time.
David, don’t get me wrong: when it comes to research, I’d prefer to err on the side of having too much info rather than not enough. However, we hear over and over about “filter failure,” and as you say, one of the strongest mechanisms we have to filter out the “trivial” is really good Editors overseeing a really good peer review process. Based on some conversations I’ve heard and things I’ve read, one might think this NEVER happens. The “solution” of eliminating EICs and throwing everything up against the wall by publishing nearly all of it strikes me as counter-productive.
Interesting numbers, and quite close to what we at Peerage of Science estimated when considering how to price our editorial tools for publishers. Based on several sources, we estimated that publishers’ peer review costs per published paper range from €250 up to €750, depending first and foremost on rejection rate. The upper end might be higher for journals rejecting more than 90% of submissions (though the five-digit figures cited by Nature are just ludicrous). More selective journals invest more in better tools and better people, seeking to create better filters to achieve better output; it’s as simple as that.
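To see how rejection rate alone drives the per-published-paper figure, here is a minimal sketch; the €75 average review cost per submission is purely illustrative, not an actual estimate.

```python
# Illustrative only: how the rejection rate scales a fixed per-submission
# review cost into a per-published-paper cost.
cost_per_submission = 75.0  # hypothetical average review cost per submission (EUR)

for rejection_rate in (0.50, 0.70, 0.90):
    acceptance_rate = 1.0 - rejection_rate
    cost_per_published = cost_per_submission / acceptance_rate
    print(f"rejection rate {rejection_rate:.0%}: ~EUR {cost_per_published:.0f} per published paper")

# 50% -> ~150, 70% -> ~250, 90% -> ~750: the same spend per submission
# balloons per published paper as selectivity rises.
```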
I’d suggest that the 54% figure, the expense incurred on rejections, is also worth further attention. The great financial success of PLoS ONE can be attributed to reducing those costs, accepting more papers and turning them into revenue centers rather than a cost of doing business. F1000 Research takes things even further, not just reducing the cost of rejection but doing away with it altogether by collecting revenue before peer review happens. Very clever approaches in both cases.
I wonder about these business models. Eventually one runs out of content or the new content becomes increasingly worthless and many will no longer wish to be associated with the journals. Readership will decline and then what we have is a vanity journal.
Interesting post, but the duality between expensive traditional journals doing more review on the one hand, and cheap open access journals doing less review on the other, is somewhat misleading. Cheap may imply less review, but expensive does not imply more review.
There are many journals that are expensive but do little costly review, in the sense that they have no staff editors at all and at most pay for an electronic platform and a half-time secretary located at a university, therefore not paying the university’s overhead. In my field, most of the journals run by for-profit publishers are of this kind (those run by not-for-profit publishers are mostly the same, just cheaper).
This is an important point for anyone who wants to understand why many scientists feel that there is a need to change the publication system, notably its part operated by for-profit publishers.
You’re confusing price with cost. The price a journal charges might have nothing to do with its costs. This post was about costs, not prices.
My point is precisely that I don’t confuse price with cost, as I explicitly state that there exist expensive journals with low costs. In your post, you did not equate strong and expensive peer review with high prices (only with high costs), but the reader may be left with this impression nonetheless. I just wanted to point out that low-cost, high-price journals do exist; I agree that this does not contradict anything you wrote above.
There’s a wrinkle here, in that author-pays OA journals are more likely to have their peer review level tied to their APC levels. They run a cost-plus model. Subscription publishers run a value-based model. As OA publishers battle to be the least expensive, they may also be battling to be the publisher that does the least peer review. Where else can the savings come from?
In an indirect way, could you say that savings may come from launching new publication outlets/product lines? The overhead costs of executive and administrative pay, legal, accounting, and office space could be spread over more (hopefully income-producing) products. This would not lessen each publication outlet’s line-item costs.
Yes, but the costs contemplated in my analysis were only editorial costs, not overheads like you’re contemplating (executive management, HR, legal, heat, lights, insurance).
Many publishers are diversifying for precisely these reasons, along with more modest outlooks for their books and journals businesses.
It’s a more complex system than that. There are inexpensive OA journals run by for-profit companies and venture capitalists. There are expensive (both OA and subscription) journals with elaborate peer review systems run by not-for-profits. Many of the journals published by for-profit publishers are owned by not-for-profits and published on their behalf.
I don’t think such a clear line can be drawn, for-profit bad, not-for-profit good. There’s a lot of grey in there.