
In response to a recent post about how ongoing digital costs are changing some fundamental assumptions about publishing economics, requests popped up to explore the cost of rejecting papers. And while at first I thought about the mechanics of rejection, the transactional aspect soon shifted in my mind to something more basic about how publishers establish boundaries.

The cost of peer-review was recently covered by David Crotty here, in response to a study pegging the economic value of free peer-review at £1.9 billion. But the cost of rejecting papers isn’t equivalent to the aggregate value of peer-review, which was posited as hours of volunteer effort translated into wages. The cost of rejection I’m talking about is more tangible and calculable. It has a clear budget. It’s something we pay for year after year, a cost of doing business. And the effects of these costs are felt at the local or journal level, first and foremost.

How do we begin calculating the cost of rejection?

Let’s start with the expense of the editors. If your editors expend the same amount of energy, on average, reviewing a paper that gets rejected as one that gets accepted, then the math is pretty straightforward. If there is an initial screening process that culls a percentage of uninteresting manuscripts with little time or effort, then you can probably factor that percentage of manuscripts out of the calculation entirely — or you can weight them somehow. Other arrangements in between or in addition to these two options could easily be boiled down to a mathematical approximation.

Once you have the framework established, it’s easy to run the numbers, since the cost of rejection has an inverse relationship with the acceptance rate. If the overall editorial budget (editors, deputy or associate editors, and some editorial board costs) is $1M (not including composition, copy editing, and so forth), the journal has a 15% acceptance rate, and the editors don’t do much after peer-review decisions are rendered, then the cost of rejection would be $850,000/year. If certain editors are more active throughout the publication cycle, their time has to be allocated between pre- and post-acceptance work accordingly.

Of course, some software and administrative costs exist in any scenario — the licensing of online submission and manuscript tracking software in addition to staff to run it and monitor submissions and author communications. If you’re running an installation of a major online submission system at the rate of $50K/year, then 85% of that cost ($42,500) could be allocated as the cost of rejection. Similarly, 85% of the cost of manuscript processing staff could also be factored in.

In this simplistic scenario, a journal with a $1M editorial budget is spending more than $892,500 to reject papers, plus a majority of the salaries of the staff processing manuscripts. That’s a financial investment the journal has to recoup by publishing the remaining papers.
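For concreteness, here is a minimal sketch of that back-of-the-envelope arithmetic in Python. The figures simply mirror the illustrative scenario above ($1M editorial budget, 15% acceptance rate, $50K/year submission system); the function name and structure are mine, not anyone’s actual accounting model.

    def rejection_cost(budget, acceptance_rate):
        # Share of spending attributable to manuscripts that are ultimately rejected,
        # assuming effort is spread evenly across accepted and rejected papers.
        return budget * (1.0 - acceptance_rate)

    # Illustrative figures from the scenario above.
    editorial = rejection_cost(1_000_000, 0.15)   # $850,000
    software = rejection_cost(50_000, 0.15)       # $42,500

    print(f"Editorial cost of rejection: ${editorial:,.0f}")
    print(f"Submission-system share:     ${software:,.0f}")
    print(f"Combined:                    ${editorial + software:,.0f}")  # $892,500

A weighted variant, in which manuscripts culled at initial screening count for only a fraction of the effort, would simply discount that portion of the budget before applying the same split.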

The cost of rejection has certainly been going up, since the number of submissions has generally increased over the years. Nearly every journal is rejecting more papers than it was 10 years ago.

Interestingly, the cost of rejection for an author-pays or page-charge journal is theoretically higher. Not only does the rejection process take a share of editorial budget, software, and staff, it also creates a clear opportunity cost for an author-pays or page-charge journal — a paper rejected is a paper not charged for.

No matter the business model, the revenues of any publication have to cover a fair amount of “not publishing.” The same goes for book publishers with their slush piles and failed acquisitions. It’s never easy making just the right thing for an audience.

Efforts to reduce these costs of rejection by creating collaborative review networks are, I think, doomed to fail. They assume peer-review is somehow equivalent between journals. This just isn’t true. Articles are rejected for a lot of subjective reasons. Journals take different stances relative to their markets. They each have editorial personalities, reputational goals, and internal cultures. They each reject for different reasons.

And these reasons are often, consciously or unconsciously, strategic.

Seen in this light, rejection isn’t about the quality of a paper qua paper, but one organization’s current opinion relative to its own definition of novelty, interest, value, worth, and quality. You could recast “the cost of rejection” as “the cost of differentiation.” And because rejection is about being different, comments on a paper rejected from one culture are probably not too useful to evaluators in another culture.

Rejection is a route to definition.

We’re an industry driven by filtering. But filtering is also a tool of competitiveness, a method of brand definition, and a path toward editorial distinctiveness.

The cost of rejecting papers is about more than just filtering papers — it’s the cost of establishing an identity.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

16 Thoughts on "Does Rejecting Papers Amount to More Than Just a Transaction Cost?"

Well said, Kent. A few thoughts…

I’m not sure the numbers are as clear as you’ve presented here. As you note, your figures assume an editor does nothing other than vet manuscripts, which is generally not the case. Time is spent on a variety of other activities: recruiting authors, commissioning reviews, even seeking out a cover image for that month’s issue. So the percentages are probably a bit lower, but your point still stands about the huge amount of money spent on rejection.

Determining the cost of rejection for a journal like PLoS ONE is perhaps murkier than you suggest as well. First, how much of the work is done by volunteer editors, thus incurring no cost to the journal? Then, what percentage of articles actually goes through the peer review process, as opposed to an immediate screen and reject by an editor if the paper is completely inappropriate? If they have a 30% rejection rate, but they’re putting a higher percentage of papers through the full peer review process than other journals, then are their costs higher on a percent-by-percent basis?

Agree entirely with this perspective, especially the point that editorial filtering is strategic and not simply a matter of sorting the good from the bad. There is an irony in all this, and that is that superior papers actually cost less to the community as a whole than do papers of less obvious merit. The “bad” papers travel from journal to journal, eating up more and more editorial resources along the way. The “good” papers get accepted more quickly, often by the first journal they are submitted to. If there is a way to resolve this, I would like to know what it is.

Coincidentally, I just reviewed one of the “bad” papers you mentioned and questioned why the editor sent it out for review in the first place.

Given the mismatch with the journal’s scope, and the fact that the data used in the analysis was old, I imagine that this manuscript has been rejected several times by other journal reviewers.

In cases like this, I often wish there were more friction in the system (or a more discerning editor). Electronic submission systems have made resubmission much faster and cheaper than the old, slow international postage model.

For some research questions, there are few “experts” to review a paper and a reviewer may see the same manuscript several times as it cascades down the chain of relevant journals. I’ve seen this personally many times. Thankfully, I save my reviews and thus the time to review a second or third time is not as onerous.

It’s often interesting to see if and how the manuscript changes with each submission. Often, honorary citations to the editor and members of the board are added in an attempt to flatter.

I agree with your post, but would have loved to read how submission charges play into this. Theoretically, wouldn’t they discourage submissions and help fund the peer-review process?

Interesting post. Journals that employ professional editors also send them out to scientific meetings, which on some level is a cost that adds to the rejection equation (both sending the editors to the meetings, and rejecting the possibly increased number of submissions that result!)

Joe is right to emphasize the systemic nature of the costs of rejection, as poorer papers absorb more time as they make their way through the food chain. The same is true for scholarly monographs. Unlike journal publishing, though, more time in book publishing is spent up front by staff editors working with authors as they develop books, and probably more books are commissioned than journal articles are. Thus, calculating the cost of rejection for books might well differ from the cost of rejection for articles. On liblicense, by the way, I posed a question to Hindawi about whether authors of accepted papers were being charged more because of the increasing costs of rejection as more papers were submitted. I never got a clear answer….

Call me a naif here, but I’d like to wistfully hope that editors spend much more time on accepted manuscripts than rejected ones.

We claim that the journal process (peer-review and editorial) is supposed to improve papers. That’s where journals “add value” over simply posting research results. The value shouldn’t simply be in the choosing process, but also in the revision/editing process.

Of course, experience tells me that, in fact, many do not.

You spend more time on an accepted manuscript than on an individual rejected one, but if you’re rejecting 9 manuscripts for every one you accept, you may end up spending more total time on the rejects, just because of the sheer volume.

Even if editors did spend more time on good papers, the cost to the COMMUNITY, as distinct from the cost to an individual publisher, is higher for bad papers than for good ones because bad ones run through multiple publishers before they get accepted, if they get accepted at all. This is a “tragedy of the commons” theme.

Joe Esposito

I would modify Joe’s comment slightly by saying that the really bad (or inappropriate) papers (like the really bad or inappropriate book manuscripts) take very little time at all, and the truly excellent ones take some extra time but not all that much. It’s the ones that are good but not great AND that are perceived to have the potential to become excellent with more revision that are the MOST time-consuming. And not only may they go through multiple publishers but also multiple rounds of revision. They can be huge time sucks, and even if they constitute only 25% of submissions, they can take up as much time as all the rest combined.
