The practice of pre-publication peer-review of scholarly papers has recently come under attack from a wide variety of sources, ranging from bloggers to The Scientist to The New York Times. Nearly every discussion of peer-review refers to it as a “burden,” and that burden is often described as “overwhelming.”

I’ve always thought of peer-review as a tremendously efficient bargain (review a small number of papers and get back the entire set of literature that’s been filtered and scrutinized at the same level).

How overwhelming is the burden of peer-review, and does the proposed solution of post-publication review offer any relief?


The Scientist quotes UCSF biologist Keith Yamamoto, as just one example of the stated burden:

“The culture of having to publish means the burden of papers is just enormous,” Yamamoto says. And the burden of reviewing this glut of papers goes almost entirely unrewarded.

It’s tempting to immediately dismiss the recent set of “advertorials” in The Scientist, conveniently timed as they are with the announcement of their new partnership with the post-publication review service Faculty of 1000. But even taken at face value, there is little data offered to support the concept of “overburdening.” Another Scientist article merely states that more articles are submitted each year, but neglects to mention that the population of scientists, and hence of potential reviewers, is also growing.

How big of a problem is peer-review for most scientists?

A recent study suggests that the “unpaid non-cash costs of peer review” undertaken by academics work out to £1.9 billion per year. That seems like a lot of money, but when one amortizes it across the total number of working scientists (the best estimate I can find is around 11.5 million worldwide, sourced here and here), and using today’s exchange rate, it works out to around $256 per researcher per year. Is that a reasonable amount of effort to contribute?
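As a quick back-of-the-envelope check, the figure is easy to reproduce (a minimal sketch; the exchange rate is my assumption of roughly $1.55 per pound, and both study figures are rounded estimates):

```python
# Back-of-the-envelope amortization of the global cost of peer review.
# Figures: ~GBP 1.9 billion per year in unpaid review labor (study above),
# spread across ~11.5 million working scientists worldwide.
# The exchange rate is an assumption (~1.55 USD per GBP at the time).

total_cost_gbp = 1.9e9       # estimated annual unpaid cost of peer review
num_researchers = 11.5e6     # estimated working scientists worldwide
usd_per_gbp = 1.55           # assumed exchange rate

cost_per_researcher = total_cost_gbp / num_researchers * usd_per_gbp
print(f"${cost_per_researcher:.0f} per researcher per year")  # -> $256
```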

The Research Information Network’s data shows that peer-reviewed journal content is valued as more important than any other source, quoting one researcher who stated, “Anything that isn’t peer-reviewed . . . is worthless.”

What is the value in having your most important information source vetted by experts? Is it worth less than $256 annually to you? Isn’t having the literature filtered in this manner — the time saved from having to go through the unacceptable dross — the very “reward” Yamamoto is seeking above?

I wanted to get a feel for how burdensome peer-review is in my field, biology. In a thoroughly non-scientific study, I asked a dozen biology professors about their peer-review burden, trying to get a good cross section of people at different stages of their careers and at different types of institutions. The vast majority told me they review around 1-3 papers each month. Scientists are under enormous work and time pressures these days, but how much of that can be blamed on reviewing a few papers each month?

Some senior researchers review more papers, often because they’re on the editorial boards of journals, and their burden can range as high as 10 to 15 papers per month. That does seem like a sizable workload, but it’s hard to think of it as an unbearable burden when it’s an entirely voluntary one. Can one really call a voluntary activity a “burden”? There’s no stigma attached to turning down an editor’s review request; every professor I contacted said they had no problem doing so. Some well-known senior professors deliberately limit their reviews to no more than a few per month, and when they have a heavy workload in other areas, they refuse all review requests. Do other fields differ greatly from biology, or is this a reasonable picture of science as a whole?

Editors and researchers in other fields, please chime in with comments below and let us know how well my admittedly small sample size reflects things in your area.

If these sorts of numbers are accurate, then peer-review does seem to offer a superb bargain in efficiency. A recent study showed that 50% of biologists use academic journal articles every working day (and another 30% use them “most days”). So by agreeing to review 1-3 articles per month, you’re guaranteed that the multiple articles you’re using nearly every day of your career have been scrutinized and filtered at that same level.

Let’s compare this level of efficiency with that seen for the most commonly proposed alternative, post-publication review, the idea of putting everything up on the web and letting “the crowd” filter things out.

If we start with the idea that information overload is a problem — that researchers are buried in a constant avalanche of papers — then imagine the size of that avalanche in a system where no paper is ever rejected, where everything gets published.

Not only will you be reading more papers, but those papers are going to be of lower quality than those you now read.  One of the key rewards of our current peer-review system is that the criticisms are used to improve papers before they’re published. Time and attention are incredibly valuable commodities. A system that requires you to spend more time reading more papers that are of lower quality is already looking problematic.

One of the main complaints against peer-review is that it delays the dissemination of research results:

Peer-review is too slow, affecting public health, grants, and credit for ideas. . . . Another common frustration among authors is the lengthy time delay between submission of a manuscript and its publication.

Does post-publication review solve this problem? Peer-reviewed journal articles are considered “very important” information sources by 92% of researchers in the study mentioned above, compared to the 4% who give that rating to “un-refereed articles.” Given this attitude, are researchers going to be willing to read articles that have not yet been reviewed in any manner at all? Or will they wait, particularly for articles outside their area of expertise, until a trusted source has posted a review? That introduces a new delay into the system: instead of waiting for an editor-driven review process, we become dependent on a stochastic one.

For that stochastic process to work, participants must review every article they read. And that’s where the efficiency of the system bottoms out. Which is the bigger burden: serving as the peer reviewer for 1-3 articles per month, or serving as the peer reviewer for every article you read?

Beyond efficiency, post-publication peer review suffers from a likely lack of expertise and trust. A highly respected journal with a track record for editorial excellence in selecting qualified reviewers is likely to be trusted more than an anonymous commenter who may or may not be qualified:

Many professors, of course, are wary of turning peer review into an “American Idol”-like competition. They question whether people would be as frank in public, and they worry that comments would be short and episodic, rather than comprehensive and conceptual, and that know-nothings would predominate. After all, the development of peer review was an outgrowth of the professionalization of disciplines from mathematics to history — a way of keeping eager but uninformed amateurs out. “Knowledge is not democratic,” said Michèle Lamont, a Harvard sociologist who analyzes peer review in her 2009 book, “How Professors Think: Inside the Curious World of Academic Judgment.” Evaluating originality and intellectual significance, she said, can be done only by those who are expert in a field.

As Phil Davis recently asked:

Is a system that allows anyone to comment on a paper — anonymous or not — really a form of “peer-review”? Where is the “peer” in “peer review”?

Replacing a flawed system with one that’s even more flawed is not an option.

Most proposals for doing away with pre-publication peer-review suffer from “Highlander Syndrome” (“There can be only one!”), the notion that everything must be a zero-sum game, and that if a new layer is added, a previous layer must be removed. In an age of information overload, we need more filters, not fewer. Yes, peer-review can be improved, and yes, if one could actually generate participation, post-publication review could be tremendously valuable. Wouldn’t it be better if these filters were additive, rather than forcing a choice between them?

Even if one assumes that peer-review is an enormous burden, it’s possible to turn it into an important educational opportunity for students. One thing — and possibly the most valuable thing — I learned in my graduate school lab was how to write a scientific paper. The evaluation of submitted papers provides a hands-on opportunity to hone these sorts of skills.

The head of a major research institution recently told me the following about his peer review practices:

One reason I do accept review responsibilities is that I work with lab members to do some of the reviews. This helps people learn the system and how to evaluate papers. Because reviewer opinions are available, I can sit down with the person from the lab and go over how their opinion relates to the other reviewers. It helps them learn to be a better reviewer, understand what to expect from their own papers, and how people respond to reviewers’ comments.

If done right, this is a great training opportunity. If a PI just gives a lab member a paper and turns in that review, it can be problematic. However, if they get the review from the lab member and sit down to go over whether it is fair, balanced, accurately deals with strengths and weaknesses, etc., and compare it with their own take on the paper, it can be a great learning experience. When I do this and then show the lab member the other reviewers’ comments after they are available, it can be very interesting. They are often surprised about what was said, missed, or put in the review, and are reassured when others found similar issues. I find they also learn a lot from the rebuttals and revisions the authors make if they resubmit the paper. It really prepares them for how to deal with comments when their own papers are reviewed.

Ultimately, what matters most is the near-universal demand for peer-review as a necessary filtering system:

Our study indicates that many researchers are discouraged from using new forms of scholarly communications because they do not trust what has not been subject to formal peer review… [R]esearchers seek assurances of quality above all through peer review, and that they do not see citation counts, usage statistics or reader ratings or other ‘wisdom of the crowds’ tools as providing an adequate substitute.

In seeking to replace pre-publication peer-review, one must look at the whole picture, at all of the benefits the current system provides, rather than focusing solely on the limited instances where it is problematic or open to abuse. Can peer review be improved? Of course, but the best bet for the future is adding to peer-review rather than doing away with it altogether.

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Discussion

52 Thoughts on "The “Burden” of Peer Review"

Disclaimer: I am in the employ of F1000/The Scientist

I certainly don’t think peer review is going to go away anytime soon – and nor do I want to see such a thing. It is a system that, no matter how flawed some commentators think it might be, still works. It keeps the barrier to entry sufficiently high that in almost all cases articles passing peer review are considered good enough for scholarly consumption.

However, why couldn’t we have a system whereby peer review, post-publication review, and crowd sourcing all exist and work together?

Lots of people have a problem with peer review, but I don’t think crowd sourcing is going to be a much better way of sorting papers. As I have said before, crowd sourcing only works when the crowd is several orders of magnitude larger than the body of work they are filtering. There are too many papers and far too few scientists willing to do the work for open repositories to really function in a capacity robust enough to cater for all scholarly literature in my opinion.

The only place I disagree with you, David, is that I certainly don’t think post- and pre-publication peer review are mutually exclusive. I think scientists genuinely want to be able to discuss and highlight papers after they have been published. Not all peer reviewed research is of the same quality, and we work in a fluid industry where knowledge does not sit still.

I don’t think we disagree at all. I’m suggesting that exclusivity, choosing only one form of review, is the wrong way to go. The more filters, the better.

I do think scientists want to discuss papers, but I question how willing they are to do so publicly. Time, economic and social pressures move these discussions into small, trusted groups, rather than sharing opinions with the whole world.

F1000 highlights notable papers, but I wonder how much participation one would see if reviewers were asked to trash papers they thought were bad. I don’t think you’d see much uptake.

I don’t want to turn this into a discussion about F1000 so I will sign off now – but you might be interested to know that faculty members do routinely ‘dissent’ papers, which often leads to very interesting discussion.

I’m not entirely sure how the figures that lead to a peer review cost of $256 per researcher were arrived at. However, a recent audit of the time spent on peer review by members of my Computer Science department revealed an average of 62 hours per year, with some senior colleagues doing much, much more (for example, a program committee member may spend well over 150 hours just for one major conference). That makes the cost several thousand dollars per year.

I certainly don’t want to undermine the value of peer review, but it is important not to underestimate its cost.
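For what it’s worth, the arithmetic behind “several thousand dollars” is easy to reproduce (a rough sketch; the fully loaded hourly rate is an assumption, not a figure from the audit):

```python
# Rough cost of 62 review hours per year at an assumed fully loaded
# academic rate (salary plus overhead); the rate itself is a guess.
hours_per_year = 62
hourly_rate_usd = 60         # assumed; varies widely by rank and country
annual_cost = hours_per_year * hourly_rate_usd
print(f"${annual_cost:,} per year")  # -> $3,720
```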

The figure comes from taking the £1.9 billion in peer review labor costs from the study linked above and dividing it by the estimated population of scientists worldwide, around 11.5 million.

It’s an attempt to amortize the costs of peer review across the entire group using the literature. It can’t be applied to an individual scientist as it’s an average. The researcher in your department who does 150 hours likely makes up for another researcher who refuses to do peer review altogether. Also, if a PI has 10 students and those students use the literature but don’t review papers, then likely the PI is doing peer review work to cover his/her students’ contribution (or lack thereof).

Surely ALL scientists are not doing peer review. Likewise, the subset of any discipline able to review said discipline’s work is very small, and some disciplines publish far more than others. This $ figure is so nonsensical as to be harmful!

The number simply represents the estimated costs of peer review amortized across the estimated number of researchers who directly benefit from it: the cost per researcher. It does not represent the amount of work done by any given researcher. Please don’t read any more into it than that because yes, that misses the point and does indeed push things into the realm of nonsense.

David,

I’ve nothing to add, but wanted to state that IMO you have knocked it out of the park on this. My own feelings on this subject are in line with your own, but you’ve expressed things in a succinct and intelligent manner, something I often cannot do. 😉

I also agree with CB, and it seems you do as well, in that these methods of review need not be exclusive.

Those who countenance post-publication peer review fall into two categories: idealists and pragmatists. The idealists point to such things as the wisdom of crowds to make their case. The pragmatists note that changes in the economics of academic publishing make post-pub peer review inevitable (though not necessarily replacing traditional review in its entirety). I have zero tolerance for idealists in any context, but the “inevitabilitarians” appeal to me precisely because the facts seem to support them.

The critical issues are the number of papers written (any evidence that this number is dropping?), the ease of Internet posting (with its many quality problems), and the downward pressure on library budgets. Some papers, probably a growing percentage, are going to seek an audience outside traditional publishing precisely because the number of slots available to authors in traditional venues is not keeping up with the number of papers written. This doesn’t mean that post-publication peer review (memorably called “sneer review” by Otto Barz) is any good, only that it is coming into play despite how good or bad it is.

Thus we have an evolving situation where everyone can agree that traditional methods are better, more efficient, etc., but untraditional methods come to be used anyway. By analogy, if your kid can’t get into Harvard, don’t you still send him to State U.?

Joe Esposito

The problem may arise because it is a case of “never…was so much owed by so many to so few”.

Editors frequently return to the same luminaries in a field for opinions. So while the average load may be low, this isn’t very meaningful if a small number of people are doing all the work. Senior figures can field >10 requests per week and this is the “burden” to which Keith refers.

Note that many reviewers get their post-docs to help out, but there is no standard approach and the journal is often unaware. Perhaps if this could be formalized in some way, the process would be improved.

Prominent, “big-name” scientists do make more obvious targets for editors seeking a reviewer, and the senior researchers I spoke with generally noted this.

That said, each researcher deals with those requests in their own individual manner. The institutional head mentioned above falls into the 10-15 per month category. Other prominent scientists deliberately limit themselves to 3 or fewer reviews a month. Some do no reviewing whatsoever.

So it’s difficult to generalize about what the exact “burden” is, since it’s 1) entirely voluntary and 2) highly variable from individual to individual. My numbers were just an attempt to get a rough sense of how much work is being done per person to create the system we now rely on.

I question the study’s claim that peer review is “unpaid.” I am sure that most people, and their funders and employers, consider it an important part of the job, not a theft of time. In fact being invited to review is a major milestone in a researcher’s career. The real problem is a deep lack of transparency in the funding of peer review. This makes cost & benefit and/or policy analysis impossible.

Nice post, David, thank you. One additional point: Editor-driven peer-review at a number of academic journals with data release and availability policies in place also reduces the burden of having to track down the data behind the presented work. Without the insistence and guidance of these journals I can imagine reproducing and expanding upon published work might be subject to further, possibly substantial, delays.

Now, if peer review were an effective means of assuring the quality of research papers, then I would totally agree that $256 per researcher per year is a very small price to pay.

So, is there actually any evidence that peer review does assure the quality of papers? I don’t think I’ve seen any. I have, however, seen much evidence of flaws in the peer review process.

Any editor of a decent journal could assure you of its value.

Having seen the reviews of hundreds of papers that were rejected/revised, I’m certain the scientific community is better off without further information overload from incorrect/incomplete studies.

Adam,
Given that there is no supreme arbiter of science, it is fundamentally impossible to assure the quality of a paper.

Authors, at least, believe that the peer review process increases the quality of their own work. See the 2009 Peer Review Survey:

Almost all researchers (91%) believe that their last paper was improved as a result of peer review (Figs 2.6 & 2.18), and the biggest area of improvement was in the discussion.

It’s all a bit anecdotal, though, isn’t it?

No doubt almost all homoeopaths believe that their sugar pills help their patients to get better, but that doesn’t mean they’re right. The evidence for peer review sounds to me like it’s of a similar quality.

One of the most interesting articles I’ve ever read on peer review is now over 30 years old, but still makes worrying reading for anyone who thinks that peer review might be useful. You can find it at http://www.springerlink.com/content/g1l56241734kq743/

If self-report data is anecdotal, Adam, then you’ll have to discredit most of the research done on human subjects. And by your argument, doctors and medical researchers should also stop asking how their patients are doing because the best they can offer are anecdotal and irrelevant responses.

Could you recommend a better way to determine the value of peer review?

Can you imagine what certain special interest groups, who dislike the results of good science, would do in the absence of peer-review? … Suddenly there would be equal numbers of “Scientific” papers “disproving” evolution, climate science, etc. as there are now providing evidence for those things.

Not necessarily, because Editors could still make the decision what to publish, just as they do now. Peer review is a secondary filter, not the primary one. Eliminating peer review in no way implies that everything gets published.

To me the deep issue with peer review is that of delay, which we know very little about. Proponents of basic science claim that the social ROI on research is enormous, on the order of 50% of all economic growth. But ROI is time sensitive, meaning that simply speeding up the return, without otherwise changing it, can dramatically increase the ROI.

Given the new age of instant global communications, taking months and months to work through peer review seems an incredibly expensive anachronism. Assuming this simple model is correct (which we do not know), it is hard to believe that the peer review filter is worth its negative impact on social ROI. Eliminating peer review could be worth trillions of dollars.
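To make that time-sensitivity concrete, here is a minimal sketch of standard discounting (the payoff and discount rate are illustrative assumptions, not figures from any study):

```python
# How a publication delay erodes the present value of a research payoff.
# Both the payoff and the discount rate here are illustrative assumptions.

def present_value(payoff, annual_rate, delay_years):
    """Discount a future payoff back to today's value."""
    return payoff / (1 + annual_rate) ** delay_years

payoff = 1_000_000           # hypothetical downstream value of a result
rate = 0.50                  # the ~50% social ROI figure mentioned above

for months in (0, 6, 12):
    pv = present_value(payoff, rate, months / 12)
    print(f"{months:2d}-month delay: present value ${pv:,.0f}")
# At a 50% rate, a 6-month delay cuts present value by roughly 18%.
```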

I would love to see some industry-wide data on this issue. The prominent medical journals with whom I’ve worked nearly all have average times from submission to first decision of less than 3 weeks. The time from resubmit to final decision is usually much faster – less than a week. So at most, excluding the time the author takes to revise his or her work, I don’t see any reason why the peer review process should take months and months – and indeed at many journals it does not. This seems like a business practice problem and not an inherent problem with peer review.

There is another problem lurking here, however, which is the time from final decision to publication. Many journals are still scheduling issues months in advance. There is no earthly reason why every paper cannot be published in its final form within a week of acceptance.

Revision time generated by peer review is part of the peer review delay time. Backlog is not, so that needs to be addressed separately, unless the backlog is to cushion the uncertainties in revision time. Unfortunately a lot of the data combines both.

Separating and publishing this data for every article might speed things up a bit. It is basic logistics, but nobody likes to have a clock put on them.

It would be interesting, and maybe even motivating, to get some ROI impact assessment for these kinds of delay. Some of the economists working the ROI problem are centered at http://www.scienceofsciencepolicy.net. So far I have been unable to get their attention.

There are indeed two separate issues here, delays caused by the peer review process and delays caused by the journal production cycle.

For the latter, I wonder how much of that is tied to the print versions of things. How much of the lead time comes from having to get issues printed and shipped? How much of the delay is due to limiting the length of print issues, keeping newer accepted articles back a month to keep the page count down?

As print starts to disappear, will the lag in publication disappear with it?

I can’t agree, David W, with the issue of how long you have to wait for more information. Firstly, if the information is incomplete or incorrect, then rushing off with wrong information is a significant problem. Peer review does aid in spotting such problems. Secondly, there are ways of getting more recent, unpublished information without industrial espionage, and that is by going to a good conference and reading the posters. Or borrowing the conference abstracts from someone who did. Usually, the material there is either in press or in review.

Conferences are certainly more timely than journal articles, but people should not have to go to them in the age of global communication. That is why we aggregate and publish proceedings: http://www.osti.gov/scienceconferences/

But I doubt that the polishing that peer review sometimes provides is worth the delay it causes. The function of peer review is filtering, not editing. Articles are merely long abstracts, to give people a sense of the results. Anyone with a serious interest contacts the author.

In the interest of speed I would abolish the editing side of article review, along the lines of proposal review. Clarity is not the problem.

I strongly disagree. As an editor, I am perhaps more aware of the level of clarity and accuracy that peer reviewers regularly add to our papers, as I see all the original submissions as compared to the final product that’s published. I want the journal article to be the most accurate, final word on that particular study, and I think that’s worth a few weeks’ delay.

For time sensitive information, there are certainly venues for rapid release. Pre-publication archives like arXiv are readily available for the rapid dissemination of rough drafts of a paper. There are fields where this practice has caught on strongly, and others less so. But the presence of these archives allows both needs to be served. Again, it’s better to supplement peer review than to do away with it altogether. In addition, journals like PLoS One are experimenting with finding a better balance between the two, and their peer review system is an attempt to speed the process.

Given the results of the studies linked in the posting above though, the scientific community strongly believes that the “polishing” is worth the effort.

Making pre-publication posting mandatory for federally funded research would help. But the question remains whether peer editing is worth the delay.

In Science I see times from first submission to final acceptance of 3 months, not a few weeks. Given the sequential nature of science, that is a lot of delay. What does having 3 non-editors, who are not familiar with the work, edit the submission add that is worth this much delay? I don’t see it.

The article does not have to stand on its own; it is basically just a news item. Perhaps journals do not understand this.

You would probably need a look inside the “sausage factory” to understand how much manuscripts change from initial submission to final acceptance; that might influence whether you thought it was worth the delay. It varies quite a bit from article to article, journal to journal.

As a journal editor, the delays I encounter are more in the process of signing on reviewers than in the review process or the revision after review. Prospective reviewers tend to wait a week before negatively responding (if they respond at all). That’s my biggest slowdown. If researchers really felt that expediency was a major priority, they’d likely change their behavior, and respond to editors immediately, and probably not wait until the deadline (or after the deadline) to submit their reviews. The slow pace of peer review perhaps accurately reflects where the need for immediate publication of results ranks in the busy life of a scientist.

There is a tradeoff between speed of publication and quality/efficiency, as mentioned in the article above. If you eliminate peer review, then you consolidate power over most fields into the hands of a small number of editors who are likely even less familiar with the work in question than the peer reviewers. Or else you just publish everything with no review whatsoever. And then you end up having to sift through vastly more papers of vastly lower quality. I’m not convinced that the ROI saved from rapid publication would outweigh the costs of time spent slogging through the slushpile or chasing after poorly designed experiments or unsupported conclusions.

Likely it’s a balance that each field would need to decide for itself. Given the results of the study mentioned above, the vast majority of researchers favor quality assurance over expediency. Again, there’s no reason it should be a zero sum game. There are lots of outlets for fields where the balance tips the other way (more announced just yesterday).

One way that the system of peer review for journals would change quickly is if publishers actually had to pay fees to have the reviews done! Given that libraries are in no mood to pay any more for subscriptions than they do now, adding that extra cost would lead to some kind of radical change!

The discussion here has focused solely on journals, however. Remember that scholarly monographs also use a system of peer review, but it is different in several ways. First, reviewers are actually paid for their reviews by publishers. Second, there is both pre- and post-publication peer review of books, the latter done in the book review sections of journals, and neither is “crowd” reviewing. Journal book review editors select their expert reviewers just as publishers’ staff do. Third, the review process, at least at university presses, also involves faculty editorial boards that bring a different level and kind of expertise to the table. The outcome is a complex mixture of inputs from three sources: the editorial board, the external reviewers, and the publisher’s staff editors. This process has a unique value that is often overlooked when people talk about peer review in an academic context. Perhaps the addition of different types of peer review in the journals arena might result in a more complex process like the one that now exists for books.

There are publishers that have tried paying reviewers – e.g. The Company of Biologists.

The practice was abandoned because even a trivial payment of $25 per review ends up costing hundreds of thousands of dollars when you multiply it by several thousand submissions – and ultimately the cost ends up being borne by subscribers.

Meanwhile the overwhelming feedback from reviewers was that payment is unnecessary because they view reviewing is part of their “duty” to science.

In Clay Shirky’s new book “Cognitive Surplus,” there’s a great section on why paying for certain things actually devalues them. When people feel altruism and generosity, the “priceless” quality is important. Put a price on it, and it actually demotivates people.

Behavioral economists (like Dan Ariely and others) have dealt with paradoxes that occur when paying someone actually reduces their willingness to complete a task.

Apparently we work under two cognitive frames:
1) Moral calculation (e.g., volunteering)
2) Economic calculation (e.g., fair pay).

In the case of paying to review a manuscript, most publishers are unable to pay reviewers a fair wage for their time, which is why a volunteer system seems to work more effectively.

I don’t know if anyone has studied this, but there seems to be an alarming trend in the international peer review “balance of trade”.

The rapidly growing volume of manuscripts submitted from Asia is being peer reviewed by a fixed or shrinking group of Western peer review volunteers. Publishers need to foster and encourage a peer review culture within Asia. If not, the whole system will fall out of equilibrium as research output grows in Asia.

Richard.

$256/year is an incredible underestimate of what the cost is for a non-academic researcher to review a paper. (HINT HINT! YES THEY DO EXIST!!!)

Maybe if it was a 2-page communication and I only reviewed just one a year, I could do it and it would not cost my employer more than $256, but given that I average a paper a month, you need to jack that figure up by a factor of 20 or 40 or something.

Despite the costs, both I and my employer are fully supportive of peer-review.

Again (see other comments above), the $256 is meant as an average. Your work likely covers other scientists who do no peer review at all. It was merely meant as a way of getting my mind around the numbers: how much does the effort cost when amortized across all of the people who directly benefit from it? The question is the per-researcher cost of peer review, not how much you (or any individual) contribute.

Peer review is a multi-headed beast. For different journals it serves different purposes, and it has many strengths and weaknesses.

On the good side of things people do indeed seem to like peer review and there is general agreement that it improves the literature, yay!

Some of the problems include inequitable credit for amount of peer review undertaken. I’m unconvinced by the hand waving argument on ROI. It would be much better were there a system in place to properly reward good peer review. One could then also see costs and impact in a clearer way. I’m not proposing any solutions to this.

Another problem is the waste in resubmission and re-peer review. A pooled peer review system across every journal would cut down on a lot of the inefficiency in the work required to publish, but that’s unlikely to happen soon.

It’s not clear cut. I agree with David that it’s not possible to say that one should replace it entirely with another system with different flaws. I think we could be smarter though, and work towards an augmented peer reviewing system.

I agree that redundancy is a problem because of the delays it causes, but in some ways it’s also a virtue of the system. If your paper gets unfairly rejected from Science, you can get a fair shake with a clean slate from Nature. That serves as something of a check and balance for when the system goes awry. If you’ve instead got a trail where the one unfair review follows your paper around tarnishing it, you may never get a fair and unbiased chance for publication.

It would help if we had good data on why sequential submission occurs, and how big a problem it is, but the data needs to be blind.

One reason may be that less important papers work their way down to their proper level. But another may be that novel, important papers have trouble finding a home, due to journal narrowness.

Peer review is a fantastic bargain. This article makes a great argument for the bargain that it is. I think it would be great to work on calculating the cost of crowd-sourcing to this same group of scientists. I think it would be incredible, and the results would be non-reproducible. That seems ironic to me.

One thing worth considering as well: if everyone is “required” to read everything and then contribute, the feedback and commentary that lead to rankings will rapidly degrade.

What would a valuable contribution look like if what determined its value was the greatest number of “like” clicks or diggs? It seems there is little chance for anything more substantial to happen, and the web has converged on a less-is-more style of feedback.

Even if you don’t allow for such terse “voting,” the web generally shows that when offered a chance for commentary, the quality of the commentary degrades inversely with the length of the work and the skills required to understand it. I can’t prove this, but we can all point to articles where all sides of an issue were explored before drawing a conclusion based on the data presented, yet the comments focused on some reaction to a tertiary point (and then those spiraled out of control).

And of course one of the main benefits of peer review is that the peers are known to the publication and thus have an established reputation. None of that exists on the web (as I post this comment anonymously).

A huge value in the peer review system is also getting a concentrated amount of more detailed and written (anonymous) feedback–even if as the author that is not the most enjoyable part of the publication process.

Peer review can be good and bad. In fact, individual peer reviews can be either good or bad. It is the job of the editor to distinguish between the two.

However, at least in the field of medicine, reviewers are being compensated above and beyond the community expectations.

CME credits are granted by many journals. I have also heard of peer reviewers being given complimentary subscriptions to journals for a period of time (though I can’t quote a journal for this one, sorry!).

Excellent post, which I agree with wholeheartedly. But it does leave unaddressed the question of what to do about the apparently widespread perception that the current peer review system is breaking down due to too many mss chasing too few willing reviewers. That perception likely has some truth behind it, if for no other reason than that scientists face strong incentives to publish but weak incentives to review.

My colleague Owen Petchey and I have proposed what we believe is a workable solution to this problem: oblige authors to “pay” for their submissions with credits called “PubCreds”, earned by performing reviews. For a link to our article, and to a blog and petition aimed at supporting and developing the “PubCreds” idea, see: http://www.ipetitions.com/petition/fix-peer-review/blog

Subsequent to publishing this idea, we learned that an economics journal from Berkeley Electronic Press already runs a version of this system. So it’s clearly feasible; the question is whether it’s worth scaling up.
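For concreteness, here is a minimal sketch of how such a credit ledger might operate (the class name and credit amounts are hypothetical illustrations, not the actual proposal):

```python
# Hypothetical "PubCreds" ledger: reviews earn credits, submissions spend
# them. The credit amounts are illustrative, not the published proposal.

class PubCredLedger:
    CREDIT_PER_REVIEW = 1
    COST_PER_SUBMISSION = 2   # assume a submission "costs" two reviews

    def __init__(self):
        self.balances = {}    # researcher -> current credit balance

    def record_review(self, reviewer):
        self.balances[reviewer] = self.balances.get(reviewer, 0) + self.CREDIT_PER_REVIEW

    def submit_paper(self, author):
        if self.balances.get(author, 0) < self.COST_PER_SUBMISSION:
            raise ValueError(f"{author} lacks credits; review more papers first")
        self.balances[author] -= self.COST_PER_SUBMISSION

ledger = PubCredLedger()
ledger.record_review("alice")
ledger.record_review("alice")
ledger.submit_paper("alice")  # succeeds; alice's balance returns to zero
```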

The first question that needs to be asked is whether there is indeed a “widespread perception” that peer review is breaking down. For those deeply involved in the issue, it seems to be everywhere, but for the working scientist, it’s not as pressing a concern. The dozen scientists I spoke with in writing this article didn’t raise any such concerns, one going so far as to demand to know “what idiots are trying to do away with peer review?”

In the recent study RIN conducted, 26% of respondents thought that “Existing peer review processes will become increasingly unsustainable.” That’s a higher number than I would have thought, and there’s certainly a difference between “becoming increasingly unsustainable” and breaking down completely. The number is likely somewhat biased upward by a strong positive response from those who regularly use Web 2.0 technologies, where this discussion is often at the forefront.

Regardless, I do agree that a system that provides career credit for peer review work would be a good thing. Not sure if PubCreds is the way to go though, and we should have an opportunity to discuss the proposal in detail in the near future in The Scholarly Kitchen.
