[Image: Scientists: Are we producing too many? Photo by mars_discovery_district via Flickr.]

In today’s other post, Phil Davis takes a critical look at PubCred, a proposed solution to perceived problems with our current peer review system.  Though it has some merit, the overall scheme is flawed, and in many ways resembles other recently proposed solutions for fixing what’s “wrong” with science.  It creates a system that goes beyond merely fixing a flaw, a system that takes on a life of its own, becoming a career goal rather than an efficient means of getting researchers back on track.

Phil’s analysis of PubCred accurately points out the main flaw in the system — the notion that expertise is democratically distributed. It’s not. As an editor and as a reader, I want articles that have been reviewed by the best possible reviewers, the smartest, most relevant experts. And that often means that a small proportion of any given research community is going to perform more than their fair share of reviews. It’s not an ideal system, but it’s better than the alternative, which gives unqualified reviewers more say and results in lower-quality reviews and the publication of poorer papers.

PubCred would concentrate far too much power in the hands of the editors who choose peer reviewers. Arguments have recently been made, particularly around the peer review policy of PLoS ONE, that editors already hold too much power and that they shouldn’t be making value judgments about what should and shouldn’t be accepted for publication. The PubCred system takes this one step further, allowing editors to essentially decide who’s allowed to submit papers by strictly apportioning review assignments. The editor of the sole journal in a small field could conceivably control the progress and funding of that entire field.

It’s a system that penalizes productivity. It forces labs to limit the results they publish. One has to be stingy with one’s PubCreds. Perhaps some projects won’t get published in favor of spending one’s precious PubCreds on a different project that seems higher profile. This seems wrong in an era of abundance. It means burying research results, results that may in the long run prove more important than the ones that seemed bigger in the short term.
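To make that arithmetic concrete, here is a minimal sketch of the ledger a system like PubCred implies. The exchange rate used below (three credits to submit a manuscript, one credit earned per completed review) is an assumption for illustration only, not necessarily the proposal's actual figures; the point is simply that publication output gets capped by reviewing output, however many results a lab has ready.

```python
# Hypothetical PubCred ledger -- a sketch for illustration only.
# The exchange rate below (3 credits to submit, 1 credit per review)
# is an assumption, not necessarily the actual proposal's figures.

SUBMISSION_COST = 3  # credits spent per manuscript submitted (assumed)
REVIEW_REWARD = 1    # credits earned per completed review (assumed)


class Lab:
    """Tracks a lab's review-credit balance and publication count."""

    def __init__(self, name: str, balance: int = 0):
        self.name = name
        self.balance = balance
        self.published = 0
        self.shelved = 0  # finished projects that couldn't be submitted

    def complete_review(self) -> None:
        self.balance += REVIEW_REWARD

    def submit_paper(self) -> None:
        if self.balance >= SUBMISSION_COST:
            self.balance -= SUBMISSION_COST
            self.published += 1
        else:
            self.shelved += 1  # the result gets buried for lack of credits


# A productive lab: ten finished projects, but only twelve reviews completed.
lab = Lab("productive lab")
for _ in range(12):
    lab.complete_review()
for _ in range(10):
    lab.submit_paper()

print(lab.published, lab.shelved)  # 4 6 -- six results stay in the drawer
```

Under these assumed numbers, a lab with ten finished projects but only twelve completed reviews publishes four papers and shelves six, no matter how good the shelved work is.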

Another flaw is that the proposed system favors labs that spend lots of time peer reviewing over labs that spend more time doing research. And that’s where it starts to resemble proposed “karma” systems for measuring participation in Web 2.0 activities like blogging and post-publication peer review.

PubCred goes beyond merely rewarding those who review — it makes reviewing an absolute necessity, a requirement for having a career. It essentially starves out those who spend more time at the bench, and becomes an end unto itself. The most prolific peer reviewer will have the strongest publication record, and he’ll be able to use his extra PubCreds to get coauthorship on papers where he’s done nothing other than provide the PubCreds.

It’s yet another system designed to drive researchers away from the bench, to force them to spend their valuable time doing things other than experiments. While these proposals are all well-meaning, there’s an air of desperation hovering about them, an admission that science is an increasingly difficult occupation. Success is based on achievement, on the discovery of new knowledge. For those having trouble competing on that level, systems that offer similar career credit for other activities (where they excel) seem like an easy way around surmountable obstacles.

But participation in the community, either through peer review or commenting or whatever, is not as important as doing actual research. These sorts of things are peripheral activities, done in support of the main goal, not goals unto themselves.  And PubCred is a system that makes research secondary to community participation. No reviews means no research output, and that’s putting the cart before the horse.

It’s unclear that any real crisis in peer review exists (the majority of researchers in this study saw no such crisis), but even if one accepts that a crisis is looming, the solution can’t lose sight of the actual goals of scientific research. We want our scientists to spend more time doing experiments, more time uncovering knowledge that will improve all of our lives. Being a scientist is already an extremely difficult way to make a living. Let’s find solutions that drive research forward instead of slowing it down. Credit should be awarded for peer review work, but it shouldn’t eclipse the credit given for actual research.

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Discussion

10 Thoughts on "When Solutions Take On a Life of Their Own"

What really gets me with this idea is that it is entirely based on the flawed notion that every scientist is worth exactly as much as the next one.

We need a big community so that there are enough mediocre scientists to pick up the slack left by those who need to devote all their time to bleeding-edge research.

If this system were introduced, research would slowly be dragged down as everybody got their “turn” to publish regardless of quality.

In addition to the important point you raise about what the central activity of a researcher should be, there is also the problem that reward systems like PubCred can actually have undesired effects, as Clay Shirky nicely describes in his book Cognitive Surplus. I wouldn’t be surprised if researchers came to see peer review as an obligation required in order to publish rather than something they do because of the intrinsic benefit they see in it. As a result, researchers might not only do the absolute minimum number of reviews, but also not put their best effort into them.

You make an important point when you say ‘it’s unclear whether a crisis in peer review exists’. This is something I hear from many on the fringes (often with other ulterior motives) but rarely from scientists themselves.

What scientists do complain about is the increasing peer-review burden – the number of papers is going up faster than the number of scientists. This trend is a consequence of ‘publish or perish’ and could be countered by tenure committees and the like focusing on quality, not quantity.

Is there a crisis in peer review? I don’t know what the convincing evidence would be for such a broad statement. Some of my former consulting clients, however, have expressed alarm. (Note that I say “some.” I am not generalizing here.) The pattern I have seen (among SOME) is that a STEM publisher relies mostly on reviewers from the developed world even as the number of submissions from developing nations grows rapidly. These clients are trying to find reviewers from a broader range of countries, but not always with success. I suspect that this discrepancy will even out over time.

Joe Esposito

I appreciate your thoughts, David — they’re very much like my own initial reaction.

But to the question “Is there a crisis?”

If the question means an imminent collapse of scholarly publishing, then clearly no, there isn’t one.

But …

We already have a system by which reviewers get credit that translates into higher likelihood of acceptance or other favors. Editors like dependable reviewers, and solicit reviews and perspective pieces from them, and typically treat their manuscript submissions preferentially. The rewards are “informal” — which in practice means arbitrary, erratic, and abused in ways that are not transparent. The system disadvantages some groups — most notably women, as has been well documented.

And because it is unpredictable, the system discourages some scientists from saying “no” to requests to review — even if they are already overloaded. Who knows if an editor will take a “no” the wrong way? How can a researcher know if she has done enough to have a good review reputation? It’s completely opaque, varies by journal, and has no obvious relation to scientific quality.

I agree that one’s experiences are going to vary widely from journal to journal.

But I’m not sure there’s any real negative treatment incurred from saying “no” to a review request. I know I don’t really have the time or the attention or the motivation to track and punish negative responders. The people who annoy me are those who take a week and a half to respond. If you respond immediately with a “no”, it helps me keep things on track rather than leaving me waiting around.

I’ve asked a few other journal editors, and none that I can find holds grudges against non-accepting reviewers. We all understand how ridiculously busy a researcher’s schedule can be. One of the things I asked the (admittedly small group of) researchers mentioned here was whether they felt there were any negative repercussions for turning down a review request, and none felt that way.

So it’s unclear to me if there are really a lot of people accepting a huge workload out of fear.

I totally agree that the number of people who complain of “too much” review responsibility is a small proportion of the total. But we’re looking for factors to explain these few. I don’t assume that their beliefs about editorial decisions are realistic.

Neither do I assume that editors’ self-reports are necessarily accurate. Speaking for myself, when a potential reviewer turns down an invitation, I’m much less likely to contact them in the future. But nothing offends me so much as someone who turns down the invitation to review, and then trashes the article in the press after it’s published. That’s the sort of thing that has really surprised me from the editorial end of the process.
